Monday, December 22, 2014

Getting Things Done: Dealing With Email

Introduction

This topic is a departure from my usual posts about information technology.  Instead of talking about specific software packages or troubleshooting, I am going to share a quick, simple method of dealing with email.  Email is something we deal with in great quantities: promotions, bill notifications, wedding announcements, work and so on.  Email presents us with a flood of information, and we usually deal with it poorly.

We are lost, staring at thousands of messages.  Some of us use email as a to-do list, only to have it turn into a graveyard of forgotten ideas and tasks.  The rest of us are constantly distracted by our devices beeping and vibrating for every email we receive.  Stop the madness, get organized and get focused with the following quick steps to managing your Inbox.

Lost

Staring at 2000+ messages in my inbox, I was lost and not sure what was important.  I use Gmail, and over the years Google has introduced features to help important messages bubble up for visibility.  Importance markers and the "Promotions" and "Social" tabs all helped, but I was still missing something.  I found it impossible to wade through a list of 2000+ messages and decide what was important.  The psychological weight of those messages was dragging me down.

Repeat after me: Email is not a to-do list

Problem number one was using my inbox as a to-do list.  When I needed to act on an email at some point in the future, I left it in my inbox as a reminder.  This only increased the number of emails and to-dos in my Inbox.  Wading through a mix of things to read and reminders was not working.  More often than not, items I needed to act on were forgotten.

You Are Distracted

Problem number two was how frequently I checked my email.  Some people like to have their phone ping, vibrate and buzz every time they receive a new message.  "I might miss something important", comes the cry.  The constant interruption is distracting, and research has shown that people who work uninterrupted do better on tasks.  Those productive people are not doing more but doing it better, by eliminating distractions and focusing on the task at hand.  Not checking your email constantly allows you to focus and work smarter.

Inbox Zero

Enter Inbox Zero, the strategy and idea of an empty inbox.  Your inbox will never stay empty; as you are reading this, messages are queueing at the steady pace of modern life.  But there is a way to manage your inbox: clear out all messages at set times and then return to work.

First, set a specific number of times per day to check email.  Mark those times in your calendar if you need a reminder.  I prefer four times a day: morning, noon, late afternoon and evening.  Check it when you get into work, before lunch, when you leave for the day, and sometime before bed to deal with personal matters.  Define your schedule and set a specific number of times to check email that works for you.

You will need to communicate to your co-workers that email is not an appropriate medium for discussing emergency or urgent situations.  When they complain you did not immediately respond to their urgent missive about the production server that has been down for an hour, kindly suggest they call your cell phone next time.  They will learn and adjust their behavior accordingly.

Second, turn off notifications: desktop notifications, phone, tablet, whatever the device.  Turn off all the email notifications.  Turn off the LED notifications too!  Science has shown we get distracted by blinky things.  Someone will call you if it is an emergency.  You will experience a wave of peace and calm as your devices go silent and dark.

Thousands Of Emails

There is no efficient way to deal with the thousands of emails already in your Inbox, not unless you want to spend a few weeks of "drag and drop".  But before we can effectively manage our Inbox we need to clear it.  An Inbox holding thousands of emails is psychological dead weight preventing focus.

First, create a folder called Archive that will hold all messages we have acted on.  Sort the messages in the Inbox chronologically and go back two weeks.  If there are still emails you need to act on, go back another week, and keep going until there are no more actionable items.  Chances are anything a month old is past due and cannot be acted on anyway.  Select all the messages from that point back and move them into your Archive.

You should be left with a few hundred emails that need to be read or acted on.

Managing The Flood

Now you have a hundred messages or fewer to deal with, three or four times a day.  If you are savvy and have implemented filters, those messages are already cataloged in a few different folders.

What follows is a personal method, adapted from Getting Things Done (GTD) and The Secret Weapon.

  1. Create a view or search option to see all Unread messages at once.  I like to see and act on everything.  Folders give email context, but I want to deal with all Unread messages in one place.
  2. Scan the message subjects and senders.  The subject and sender will indicate if you need to read the email.
  3. Select all the ones you are not interested in reading.
  4. Mark them as read and move them to the Archive folder.
  5. Read the remaining messages and decide immediately:
    1. Is this a five-minute reply?  If yes, reply right now.
    2. Does this require an action on my part?  Schedule it in your calendar right now.
    3. Does this require a well thought out response?  Forward it to Evernote or your to-do list for follow-up.
    4. No action required?  Read it and move it to the Archive.  These emails provide "situational awareness" but do not trigger any action on your part.
Note on Step 5.4: GTD has a "Cabinet" or "Digital Library" concept, a place for articles, reference material, and supporting material for projects.  I receive email containing information for active projects, such as instructions, stakeholder contacts, and meeting notes.  I forward those emails to Evernote and associate them with the project for easy retrieval.

Conclusion

Do not allow your Inbox to become psychological dead weight, dragging you down every day.  Turn off those notifications to maintain focus and you will be more productive.  Manage the information intelligently and act on all email at specific intervals throughout the day.  Reply immediately if it will take less than five minutes, send messages to Evernote to handle later, read messages for situational awareness, and make calendar appointments for commitments.  Archive all email for later retrieval.

This will provide you with greater focus and, in turn, productivity.  It is one of many steps on the path to GTD.  Good luck.

Tuesday, September 30, 2014

Practical Puppet Development Using Vagrant and Jenkins

Preface

I don't consider myself a software developer, and I have no practical experience, training or formal education in software development (save one C class I took in the late 1990s).

Recently I started writing puppet code to provision applications.  I wasn't writing and deploying code fast enough, and I was making a lot of mistakes.  I wanted to go faster and make fewer mistakes.  To accomplish those goals I needed to automate the testing, deployment and development environment processes.

I adopted the tools (Jenkins, Vagrant and Git) of the other developers around me, incorporated other tools (r10k, puppet-lint) specific to puppet, and developed a workflow.

This post is not a technology deep dive, nor will I touch on the deployment process.  This is about developing a workflow with which you can write good code and build a consistent development environment.

Everyone wants to write good code, but why care about the development environment?  A consistent development environment allows your peers to review the code before pushing it into QA or production.  Your peers have 100% assurance their development environment is exactly the same as yours.  It is important to have identical environments to ensure all tests are repeatable and have the same outcome.  In my opinion, it is even better if your development environment is identical, or nearly identical, to production.

Workflow

My workflow for writing and testing code consists of these steps: 
  1. Developer writes code on local workstation.
  2. Developer commits code to centralized repository via version control system.
  3. Developer's version control tool runs a pre-commit hook for style and whitespace checks. The commit is aborted if these tests exit with an error.
  4. Successful commit is received by centralized version control repository.
  5. Centralized version control repository notifies orchestration tool of commit.
  6. Orchestration tool performs additional testing.
  7. Developer provisions a new or re-provisions an existing virtual machine.
  8. The automated provisioner starts a new virtual machine using the local hypervisor.
  9. The automated provisioner runs the automated code installer.
  10. The automated code installer downloads any required code for the virtual machine.
  11. The provisioner provisions the virtual machine with the required code.
  12. The virtual machine is ready for testing.

Technologies

The specific technologies don't matter.  You could weave together other technologies that serve the same purpose and still use the same workflow.  Here are the technologies I use and what they provide:
  • VirtualBox: local virtualization, on the developer workstation
  • Vagrant: automated provisioning for virtual machines, bootstraps provisioner
  • Puppet: provisioning for the virtual machines
  • r10k: automated Puppet code installation
  • Jenkins: automation/orchestration tool to automate testing of Puppet code
  • Gitlab: centralized version control repository
  • git: version control system
  • puppet-lint: style check for code against the Puppet style guide
  • puppet parser validate: syntax check for valid code
I've re-written my workflow, identifying each technology at each step.
  1. I write code on my workstation.
  2. I commit the code to Gitlab using git.
  3. I have a pre-commit hook that runs puppet-lint and a whitespace check.
  4. My commit is aborted if those tests fail.
  5. On a successful commit, Gitlab notifies Jenkins via a webhook to run a job.
  6. Jenkins runs the job and performs additional testing.
  7. I provision a new or re-provision an existing virtual machine on my workstation.
  8. Vagrant handles the virtual machine provisioning and spins up a virtual machine using VirtualBox.
  9. The Vagrant r10k plugin downloads any required Puppet code from either the Puppet Forge or Gitlab.
  10. Vagrant runs the Puppet provisioner after the virtual machine has started.
  11. Puppet provisions the machine according to the virtual machine's manifest, using the code r10k downloaded.

Put It All Together

So how to glue all those technologies together?  Let's walk through each step and outline what features are used to bring them together.

Step 1 - Writing code.

Be comfortable writing code on your workstation.  I chose vim to write the majority of my code because I'm usually at a command line all day.  I installed a vim plugin written specifically for puppet; it helps enforce syntax and does some autocompletion.

For Vim lovers:
Download and install https://github.com/rodjek/vim-puppet.
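
A minimal install sketch, assuming you manage vim plugins with pathogen (adjust for your preferred plugin manager):

# Install pathogen if you don't already have it.
mkdir -p ~/.vim/autoload ~/.vim/bundle
curl -LSso ~/.vim/autoload/pathogen.vim https://tpo.pe/pathogen.vim

# Clone the vim-puppet plugin into pathogen's bundle directory.
git clone https://github.com/rodjek/vim-puppet.git ~/.vim/bundle/vim-puppet

Then add "execute pathogen#infect()" to your ~/.vimrc so the plugin loads.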

Do it your way:
Find software that helps enforce good coding habits.  You don't need to use a full-blown integrated development environment.

Step 2 - Commit the code.

Use a version control system and commit your code to it often.  I use a shared git repository, Gitlab.  More on Gitlab and how it's configured later.

For Git users:
Download and install git on a Mac.
Download and install Gitlab on a server for private repositories.
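
A quick sketch, assuming a Mac with Homebrew and a Gitlab server at a made-up placeholder address:

# Install git via Homebrew.
brew install git

# Point an existing working copy at the Gitlab server and push.
git remote add origin git@gitlab.example.com:puppet/mymodules.git
git push -u origin master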

Do it your way:
Whatever repository you choose, ensure it has the ability to run an action on every commit (a "webhook" in Gitlab parlance).  This important feature is used later on when we set up automated testing.

Step 3, 4 - Use a pre-commit hook.

I use a git pre-commit hook to check my code before it is committed.  This simple automation runs puppet-lint and a whitespace check.  I added the following to .git/hooks/pre-commit:
#!/bin/sh
# Abort the commit if puppet-lint reports a problem.
puppet-lint --with-filename . || exit 1
# Abort the commit on trailing whitespace and other whitespace errors.
git diff --cached --check || exit 1
The commit will not proceed unless the hook exits without error.  This helps enforce good code when it counts: during the development process.

For Git users:
Read more about git hooks.
Install puppet-lint.
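
Both pieces are quick to set up:

# puppet-lint is distributed as a Ruby gem.
gem install puppet-lint

# Make the hook executable, or git will silently skip it.
chmod +x .git/hooks/pre-commit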

Do it your way:
Find a way to run a script before a commit to the version control system.

Step 5, 6 - Automate testing.

I use Jenkins to automate testing: a Gitlab webhook calls a Jenkins job URL, and the job runs puppet-lint and "puppet parser validate".

Why run puppet-lint again?  Because the repository is shared and git hooks live on each developer's workstation, I can't guarantee that every developer runs puppet-lint in a pre-commit hook.  Running puppet-lint again in the Jenkins job ensures it is run at least once.

I could write an entire post on just Jenkins but there are some basic steps you need to accomplish.

For Jenkins users:
  1. Install and configure Jenkins on a server you control.
  2. Create a new job that uses your puppet code repository that can be triggered remotely.
  3. Add executable shell code to run puppet-lint and puppet parser validate on your code (see the sketch after this list).
  4. Install puppet-lint and puppet wherever the Jenkins executor is running the job.
  5. Configure the job to succeed only if the tests pass.
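
A sketch of the shell build step, assuming the job's workspace is a checkout of your puppet code repository:

#!/bin/sh -e
# Style-check every manifest in the workspace; any failure fails the build.
puppet-lint --with-filename .

# Syntax-check every Puppet manifest with the parser.
find . -name '*.pp' -print0 | xargs -0 -n 1 puppet parser validate
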
Do it your way:
Download and install an orchestration engine that will run your tests whenever code is pushed to the repository.

Step 7 - Test locally.

Testing locally on your workstation may not be suitable for every situation, but it has worked for me in almost all of them.  To test my puppet code I spin up a virtual machine and apply the puppet code to it.  I really like VMware, but I find VirtualBox suits my needs and is free.

VirtualBox users:
Download and install VirtualBox.

Do it your way:
Download and install VMware Fusion or whatever virtual machine provider you want.  Just make sure it has a Vagrant plugin.
Download and install the Vagrant VMware Fusion plugin if you're using VMware.

Step 8 - Use Vagrant.

Testing all my code locally would be a royal pain if it weren't for Vagrant.  Vagrant allows me to spin up a consistent development environment on my local workstation.  I then use the Vagrant provisioners to download the puppet modules and apply the code.

There is no substitute: download and install Vagrant.
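
A typical session once Vagrant is installed (the box name here is only an example):

# Create a Vagrantfile based on a published box.
vagrant init centos/7

# Boot the virtual machine and run any configured provisioners.
vagrant up

# Re-run provisioning after a code change, without rebuilding the machine.
vagrant provision

# Destroy the machine to start over from a clean slate.
vagrant destroy -f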

Step 9 - Use r10k.

If you've never heard of r10k before, go read this article.

There is no substitute: download and install the Vagrant r10k plugin.
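
The plugin installs through Vagrant's own plugin manager:

# Install the vagrant-r10k plugin into Vagrant.
vagrant plugin install vagrant-r10k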

Step 10, 11 - Putting it all together.

Lastly, you'll need to create the directory structure to support all the configuration files, puppet code, etc.

I've written this shell script, which builds a basic directory structure and the configuration files supporting Vagrant, Puppet, r10k and hiera.

  1. Download the shell script.
  2. Run as: ./createVagrant.sh linux-server
  3. Add any puppet modules you want available to puppet/Puppetfile. See syntax for Puppetfile and the example after this list.
  4. Apply those modules to your node by creating a puppet node definition in manifests/default.pp.
  5. Define any configuration in hiera and store the config in puppet/hiera.
  6. Run: vagrant up
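
As an example, a Puppetfile that pulls one module from the Puppet Forge and one from a private Gitlab server might look like the following (the module names and Gitlab URL are placeholders):

# Write an example Puppetfile for r10k to consume.
cat > puppet/Puppetfile <<'EOF'
# A module from the Puppet Forge.
mod 'puppetlabs/ntp'

# A module from a private Gitlab server, pinned to a branch.
mod 'mymodule',
  :git => 'git@gitlab.example.com:puppet/mymodule.git',
  :ref => 'master'
EOF

# Boot and provision the virtual machine.
vagrant up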

Summary

This is my first attempt at putting my workflow out there, and I hope it helps you get started.  Feel free to switch up the tools.  I happened to use what the developers around me were using, which helps whenever I have a question about a particular tool.  Take the workflow a bit at a time or take it all at once.

Monday, August 11, 2014

Samsung I337 Rescue

So I blew up my phone after rooting it.  I deleted one too many APK files trying to rid my phone of AT&T's and Samsung's crapware, and I didn't have a backup.  Whoo hoo, I love to live dangerously.

Everything was working but the phone icon was missing, and I couldn't make any phone calls.  Downloading Google Voice or the new Google Phone app didn't fix the issue.  Neither did the myriad other suggestions I found on various boards: remove cached data, look for a disabled app, etc.

I must have deleted an important APK somewhere along the line.  After a couple of days I figured it out and fixed my soft-bricked Samsung Galaxy S4 I337.  If you're in the same boat, hopefully this guide can help.

I needed to surmount three problems: the amount of misinformation on the Internet; the number of ad- and spyware-laden websites related to fixing Android problems; and the lack of downloads and information from the manufacturer, Samsung.

First, there's a ton of junk on the Internet; don't believe half of it.  Half the people on the Android forums don't have a clue what they're talking about, and the other half are posting links to "solutions" which are really just ad- and spyware-laden websites.  Which leads us to problem number two.

Second, never, ever pay for "premium" download speed, install a "download helper" or buy into any of the other crap you'll find while looking for legit ROM downloads and software.  There are scammers looking to make a buck off poor suckers who are simply trying to unbrick their phones and will download anything.  Leading us to problem number three.

Samsung needs to get off its ass and make legitimate, verifiable stock ROMs available via its website.  Doing otherwise drives people to shady, secondhand sources.  People are left with a choice: either trust a potentially evil source or continue with a bricked or soft-bricked phone.  Gee, tough decision.

I found a stock image on Mega after a bit of Googling.  Odin is the tool of choice when it comes to reflashing your device.

To fix your soft-bricked or bricked Samsung Galaxy S4 I337:

  1. I started on a "clean" Windows install without any Samsung USB drivers installed.
  2. Download Odin. http://odindownload.com/
  3. Download stock ROM for the Samsung Galaxy S4 I337 AT&T phone. https://mega.co.nz/#!bJMxgARQ!8zxUFpiXhteaLDNemgqRbBSzy6gZJHRCPRW2YndIO4g
  4. Install Odin.
  5. Place phone into download mode. Hold Volume Down, Home and Power buttons.
  6. Connect phone to Windows PC. Note: I performed this in a Windows 8 VM running on VMware Fusion on a Mac and it worked just fine.
  7. Odin options: select Auto Reboot, F. Reset Time. Click "AP", select the stock ROM you downloaded.
  8. Click Start.
  9. Wait.
  10. Phone will reboot automagically.
  11. Rejoice.

Friday, January 24, 2014

Monitoring VMware View, Desktop Pool Availability

VMware View is difficult to monitor. Basic things, such as the View Connection Server/broker, are easy: the service is either up or down, running out of memory or not.  Availability of desktop pools is a different matter, and VMware View has no native alarm or health check features built in to help the View administrator monitor availability.


I was tasked with developing a health check to monitor our VMware View floating, non-persistent, linked clone desktop pools. We had a couple of instances where either provisioning was turned off (human error) or the pool's Max Number of Desktops was set too low.


An avid redditor, I turned to r/vmware first; see my thread here and the many helpful comments.  Some folks said to buy vCOPS for View, but that wasn't an option.  A nice redditor named BlowDuck gave me the initial start on a health check.  Their health check was good, but I found a couple of ways to improve it.


I whiteboarded the following availability matrix while thinking about their health check and what I was monitoring.  The matrix crosses Remote Sessions, Desktops Available and Max Number of Desktops; the diagonal and duplicate cells carry no information, and the meaningful intersections are:
  • Remote Sessions vs. Desktops Available: Composer, storage and vCenter need to keep up with demand as more users log in.
  • Remote Sessions vs. Max Number of Desktops: the pool needs to be configured to support the maximum number of sessions.
  • Desktops Available vs. Max Number of Desktops: the number of Desktops Available will approach the Maximum Desktops Configured as more users are entitled to the pool.  This gives early warning before the number of Remote Sessions reaches the Maximum Desktops Configured.


I decided to change their monitoring script after seeing the intersection of the variables involved and what they represented.


First, BlowDuck's original monitoring script called for monitoring the pool by looking at the number of remote sessions and checking whether it was within X of the pool's Max Number of Desktops.  This is a static way to monitor the pool: the health check would need to be rewritten whenever the number of users or the size of the pool changed.

Second, Max Number of Desktops is not a representation of the actual number of running and available desktops, only of what could potentially be available. What if a provisioning error or the environment's capacity prevented the pool from reaching Max Number of Desktops to service an increasing number of remote sessions?

My script addresses those issues by using the number of remote sessions and the number of desktops in a pool.  It calculates what percentage of the desktops within the pool are currently utilized.  The script is passed WarningLevel and CriticalLevel arguments as percentages, which tell it when to alarm, e.g. alarm at 75% utilization.
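
To illustrate the alarm logic (the real script is PowerShell using View PowerCLI; this shell sketch with made-up counts shows only the threshold math):

#!/bin/sh
# Hypothetical inputs: session and desktop counts for a pool, plus
# warning and critical thresholds as percentages.
SESSIONS=120
DESKTOPS=150
WARNING=75
CRITICAL=90

# Percentage of the pool's desktops currently in use.
UTILIZATION=$((SESSIONS * 100 / DESKTOPS))

# Standard Nagios plugin exit codes: 0=OK, 1=WARNING, 2=CRITICAL.
if [ "$UTILIZATION" -ge "$CRITICAL" ]; then
  echo "CRITICAL: pool at ${UTILIZATION}% utilization"; exit 2
elif [ "$UTILIZATION" -ge "$WARNING" ]; then
  echo "WARNING: pool at ${UTILIZATION}% utilization"; exit 1
else
  echo "OK: pool at ${UTILIZATION}% utilization"; exit 0
fi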


A couple of notes on using the number of remote sessions and desktops: 1) it's slow, and 2) it isn't a true representation of what is available.

First, there is no extension data in View PowerCLI to make the counts readily available, so the script uses the object count method.  This makes the script slow, really slow: it takes almost 3 minutes to count a pool of 150 desktops and sessions.  Because the script is so slow, I had to modify our Nagios configuration in several places, increasing the timeout values, to allow the script to run.

Second, the script doesn't really count available desktops; it counts the number of desktops in a pool.  The state of those desktops is unknown to the script.  They could be Available, Agent Unreachable, Deleting, Deleting (missing), Customizing or any other state.

You can find the script on GitHub at https://github.com/mmarseglia/view-tools/blob/master/desktopsAvailable.ps1.