Shell

PowerShell parameters

There is no question that PowerShell is a big improvement over the old DOS shell scripting. If you want to do anything more complex than piping commands from one cmdlet to another, you are probably going to end up using functions or passing parameters into your scripts. While you can add parameters to a function in the usual way by including them in parentheses after the function name, if you want your function or script to behave like a cmdlet you are going to have to learn a new way – advanced functions.

The standard way to work with parameters is to include a param() block. This does little more than move the parameters from the parentheses to further down the script, but it does make splitting parameters over multiple lines more elegant – which you are probably going to do.
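
As a minimal sketch (the function name and parameters here are my own, not from the original post):

function Get-Greeting {
    param(
        $Name,
        $Greeting
    )
    "$Greeting, $Name"
}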

PowerShell automatically uses the name of the variable as the parameter identifier. So param($Query) will allow you to call the function with -Query "To seek the Holy Grail"

The advanced features are activated by including [Parameter(...)] before the parameter name. The [Parameter()] attribute accepts multiple advanced features separated by commas. For example, if you want to make sure the parameter is supplied, add Mandatory = $true

After the [Parameter()] attribute you can optionally add validation attributes in the format [ValidationType(condition)]. For example, to ensure you don’t get an empty parameter, include [ValidateNotNullOrEmpty()] before the parameter name. A complete list can be found in Microsoft's about_Functions_Advanced_Parameters help topic.

There are also common parameters that most cmdlets accept. Rather than enter these each time, just include the [CmdletBinding()] attribute before the param() block. The function will now automatically accept the following common parameters:

  • Verbose
  • Debug
  • ErrorAction
  • WarningAction
  • ErrorVariable
  • WarningVariable
  • OutVariable
  • OutBuffer
  • PipelineVariable

These common parameters also propagate down through cmdlets and other functions that your function calls. So for instance, if you pass -Verbose into your function and your function uses Invoke-RestMethod, this cmdlet will be called with -Verbose automatically and you will see the details of the HTTP request and response.

You can also add help to your function or script by including a formatted <# #> comment block directly after your function definition, or at the very start of your script, similar to a docstring in Python. For details about how to format the comment block, see Microsoft's about_Comment_Based_Help topic.

A working example is always better than dry text and I’ll be uploading an example soon.
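
In the meantime, here is a minimal sketch pulling the pieces above together (the function name, parameter and return value are my own invention):

function Get-Answer {
    <#
    .SYNOPSIS
    Returns a stock answer to the supplied question.
    #>
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$Query
    )
    # Because of [CmdletBinding()], -Verbose passed by the caller just works
    Write-Verbose "Question received: $Query"
    return 42
}

Calling Get-Answer -Query "To seek the Holy Grail" -Verbose prints the verbose message before returning 42, while an empty -Query fails validation.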

pip + virtualenv = pipenv

I have long argued that one of the reasons Node took off so quickly was the inclusion of npm for package management. For all its faults, it allows anyone to quickly get up and running with a project and to build powerful applications by utilising other libraries. What’s more, by being local first, it avoids some of the dependency problems caused by different applications requiring different versions of the same library (at the expense of disk space and a little RAM).

Python did not initially have a package manager, but pip has evolved into the de facto standard and is now included with the installer. All packages are installed globally on the machine; this makes sense given Python’s history but is not ideal. To have local packages just for your app you needed virtualenv or a similar tool.

The obvious next step to close the gap with npm would be a single tool that would set up a local environment and install the modules into it. And that is exactly what pipenv does. It was created by Kenneth Reitz (the author of the requests module, which I’ve used in several posts) and has quickly gained popularity in the last year.

Lacey has done a good write-up of the history that led to pipenv, and there is a full guide in the pipenv documentation, but it is just as simple to show you with an example. First install pipenv with pip install pipenv

Then you can create a project, complete with a virtual environment, and install the requests module with the following:

mkdir pipenvproject
cd pipenvproject
pipenv install requests

That’s it (although personally I would have liked to see a pipenv init command). To prove there is a virtual environment, use the shell option to switch to it (no more remembering the path to the batch file). Try the following:

pipenv shell
pip list
exit
pip list

The first pip list should just show requests and its dependencies. After exiting out of the virtual environment shell, the second pip list will list all of the packages installed on your system.
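
Behind the scenes, pipenv records the dependency in a Pipfile in the project directory (with exact versions pinned in Pipfile.lock), so the environment can be recreated later with a plain pipenv install. The generated Pipfile looks something like this (the source URL and sections may vary by version):

[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
requests = "*"

[dev-packages]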

Running SQL scripts

PowerShell has a very useful Invoke-Sqlcmd cmdlet to run T-SQL scripts if you install the sqlps module. This module is installed as part of SQL Server Management Studio (SSMS), but you may not want to install the whole package just to run scripts. However, you can install just the PowerShell module and a couple of dependencies.

First you need to download the module and its two dependencies. There are different versions for SQL 2012, 2014 and 2016, but all can manage instances of SQL Server back to 2000 (SQL Server 2000 must be running SP4 or later, and SQL Server 2005 SP2 or later), so there is little reason not to just go with SQL 2016. You need the PowerShell Extensions for Microsoft SQL Server (PowerShellTools.msi) module plus the dependencies SQL Server Shared Management Objects (SharedManagementObjects.msi) and CLR Types for Microsoft SQL Server (SQLSysClrTypes.msi). All are available in both 32- and 64-bit, depending upon which version of PowerShell you want to run.

SQL Server 2016 Feature Pack (download button and tick packages)
SQL Server 2014 Feature Pack (download button and tick packages)
SQL Server 2012 Feature Pack (links directly to the package download in the Install Instructions tab)

Once downloaded, you need to install the CLR Types first, followed by the Shared Management Objects and then the PowerShell module. This can be automated with the following commands:

msiexec /i SQLSysClrTypes.msi /qn
msiexec /i SharedManagementObjects.msi /qn
msiexec /i PowerShellTools.msi /qn

Once installed you are ready to go. Later versions of PowerShell will implicitly import the module as needed; in earlier versions you will need to add the line Import-Module sqlps

To quickly test everything is working, open a new PowerShell console window and enter the following

Import-Module sqlps
Invoke-Sqlcmd -ServerInstance servername -Query "SELECT @@SERVERNAME"
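
To run a whole script file rather than an inline query, Invoke-Sqlcmd also accepts an -InputFile parameter (the path below is just a placeholder):

Invoke-Sqlcmd -ServerInstance servername -InputFile "C:\Scripts\example.sql"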

TFS PowerShell cmdlets

With Git or Mercurial (or just about any other source control product) you take command line functionality for granted. However, with TFS, Microsoft has made getting the PowerShell cmdlets convoluted. Although you can download the TFS 201x Power Tools, which contain the PowerShell cmdlets, you need to have Visual Studio installed. If you just want to download the latest files of a project as part of a task this is overkill, and you probably don’t want Visual Studio installed on all your servers. However, there is a way to get the cmdlets on to other servers with a bit of hacking.

First you need the assemblies. I would take them from the GAC on the TFS server. This ensures you have the correct DLL versions. You can copy all the DLLs you need with the following PowerShell command – replace C:\Temp\TFS with the path you want to save the DLLs to.

gci "$env:windir\assembly\GAC_MSIL\Microsoft.TeamFoundation.*" -recurse -include "*.dll" | % { cp $_.fullname "C:\Temp\TFS\$($_.name)" }

Copy these to the destination server. You can no longer copy DLLs straight into the GAC; instead you need to install them. You could do this with GACUtil.exe, but that only comes bundled with the .NET SDK (or a Windows SDK) and we don’t want to install a massive SDK on all our servers. Thanks to a TechNet article, you don’t have to. Just run the following PowerShell commands in an elevated shell (run as administrator). Again, replace C:\Temp\TFS with the path you saved the DLLs to.

[System.Reflection.Assembly]::Load("System.EnterpriseServices, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")
$publish = New-Object System.EnterpriseServices.Internal.Publish
gci C:\Temp\tfs\*.dll | % { $publish.GacInstall($_.fullname) }

At this point you can write PowerShell scripts that call TFS, if you are prepared to reference the assemblies directly. The exception is DataStoreLoader (Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader), which comes in both 32-bit and 64-bit versions; I’ve yet to find a suitable way of getting these across.

We obviously want to use the cmdlets. On the reference machine where you installed Visual Studio and TFS Power Tools, go to the C:\Program Files (x86)\Microsoft Team Foundation Server 201x Power Tools\ directory. You need to copy the Microsoft.TeamFoundation.PowerTools* files and the PowerShell folder to the target server. I’ll assume you copy them to C:\TFSPowerShell\ for the rest of the article.

So that PowerShell knows about the snapin, you need to create a registry entry. The easiest way is to copy the following into Notepad (other text editors are available), save it with a .reg extension (in Notepad, set the save-as type to All files (*.*), otherwise it will add a .txt extension) and then run it. Change C:\\TFSPowerShell to the location you saved the files to (notice the double backslash – it needs to be escaped) and the version to the relevant TFS version number (see below).

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.TeamFoundation.PowerShell]
"PowerShellVersion"="2.0"
"Vendor"="Microsoft Corporation"
"Description"="This is a PowerShell snap-in that includes the Team Foundation Server cmdlets."
"VendorIndirect"="Microsoft.TeamFoundation.PowerShell,Microsoft"
"DescriptionIndirect"="Microsoft.TeamFoundation.PowerShell,This is a PowerShell snap-in that includes the Team Foundation Server cmdlets."
"Version"="11.0.0.0"
"ApplicationBase"="C:\\TFSPowerShell"
"AssemblyName"="Microsoft.TeamFoundation.PowerTools.PowerShell, Version=11.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
"ModuleName"="C:\\TFSPowerShell\\Microsoft.TeamFoundation.PowerTools.PowerShell.dll"
"CustomPSSnapInType"="Microsoft.TeamFoundation.PowerTools.PowerShell.TFPSSnapIn"

The following table gives you the TFS year to internal version conversion for use in the above file. The packed number can be used in the full reference link below if you want to link to the version number directly.

TFS year    Internal version    Packed
2010        10.0.0.0            100
2012        11.0.0.0            110
2013        12.0.0.0            120
2015        14.0.0.0            n/a

And that’s it. You can now add the snapin with Add-PSSnapin Microsoft.TeamFoundation.PowerShell and the PowerShell cmdlets will be there ready to use. If you have a 64-bit version of Windows and you also want to use this in the 32-bit version of PowerShell, you will need to create a second entry in the registry. The file is the same as above, just with the location changed to

[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Microsoft\PowerShell\1\PowerShellSnapIns\Microsoft.TeamFoundation.PowerShell]

Time for a quick demo. Let’s download the latest files under a specific TFS path to the folder C:\TFSfiles with the following. Just change the $tfspath variable to your path in TFS and the URL to your TFS server.

Add-PSSnapin Microsoft.TeamFoundation.PowerShell
$tfspath = "$/collection/project/folder"
$server = Get-TfsServer -Name "http://tfsserver/tfs/collection"
Get-TfsChildItem $tfspath -Server $server -Recurse | ?{$_.ItemType -ne "Folder"}  | %{$_.DownloadFile(($_.ServerItem).Replace($tfspath,"C:/TFSfiles") ) }

Now you can have all your scripts source controlled and updated automatically on the servers before they are run. Documentation is sketchy for the commands but there is a full reference to the objects being used in the background. Don’t forget you can run PowerShell scripts from your Python code.

Vagrant and Hyper-V

Having previously dismissed Vagrant as a VirtualBox tool, I’ve revisited it and found it now supports (from v1.5 onwards) Hyper-V. This is important if, like me, you have Windows 8.1 Pro or 10 Pro with the Hyper-V feature turned on, as VirtualBox will not work alongside it (workarounds exist if it is really necessary).

When creating VMs, Vagrant needs to be able to manage Hyper-V and create a share for the VM to link to. While you can give a standard user the ability to do the former (which I highly recommend, see the end of the article for details), creating a share needs administrator rights. So unless you are just bringing up an isolated VM, you will need to run the console as administrator to run the vagrant up command.

The other change is needed because Vagrant will not always detect Hyper-V and will default back to VirtualBox. You can force it to use Hyper-V by adding --provider=hyperv to the command line each time. A better solution is to add an environment variable named VAGRANT_DEFAULT_PROVIDER with the value hyperv – this can be done from Control Panel -> User Accounts -> Change my environment variables.
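
For example, from a command prompt you can force the provider once, or set the variable permanently with setx (the standard Windows way to set a user environment variable; it takes effect in new console windows):

vagrant up --provider=hyperv
setx VAGRANT_DEFAULT_PROVIDER hyperv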

The last note is about virtual switches and networking. There are some limitations with Vagrant and networking, as detailed in the Vagrant documentation. In order for Vagrant to detect the IP, the virtual switch you connect to must be external.

You are almost ready to go. You should be able to follow the getting started tutorial now, and see the VM being created in Hyper-V Manager, but it is unlikely you will be able to run vagrant ssh unless you have a command line SSH client installed. PuTTY is probably the most popular SSH tool for Windows, but you don’t want to manually convert the private key and have to enter the details each time.

Thankfully there is a plug-in from Nick Downs to do this for you. Download PuTTY and make sure its directory is included in your path environment variable. Then run the following command to install the plug-in.

vagrant plugin install vagrant-multi-putty

You can now use vagrant putty instead of vagrant ssh.

To grant a standard user Hyper-V administrator rights, simply add them to the local Hyper-V Administrators group (Computer Management -> System Tools -> Local Users and Groups -> Groups). Once added, log off and on again for the change to take effect. With this done you will be able to use the Hyper-V Manager snap-in as well as the vagrant commands.

Running JavaScript

I’ve covered running PowerShell scripts using the powershell.exe command in a previous post. The same principle will allow you to run any script engine. For example, you can run JavaScript as long as you have Node.js installed. Assuming you add node to the path variable (the default), there are no additional options required; just specify the script to run as the argument.

I’ll also use this opportunity to save a couple of lines. One of the many great things about Python is that it gives you the source code to standard library functions. Looking at how problems were solved in the library is a great way to improve your Python ability. I want to see if I can improve on the try/except block, so I need to know what subprocess.check_output actually does.

I can see the code for check_output by opening subprocess.py from the lib directory. Here we can see that the main work of check_output is done by just three lines: initialise the Popen object, then call its communicate and poll methods. We can take the six lines of the try/except block and replace them with these three, giving us the following.

import subprocess, tempfile, os

def js(command):
    commandline = ['node.exe']
    # write the script out to a temporary .js file for node to execute
    with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False) as f:
        f.write(command)
    commandline.append(f.name)
    process = subprocess.Popen(commandline, stdout=subprocess.PIPE)
    result, unused_err = process.communicate()
    exitcode = process.poll()
    os.unlink(f.name)
    return exitcode, result.decode()

retcode, retval = js("console.log('Hello Python from JavaScript (node.js)');\n")
print("Exit code: %d\nReturned: %s" % (retcode, retval))

You can make the same changes to replace the try/except block in the PowerShell post from a few weeks back.

Running PowerShell

After a slow start, there is now a wide range of PowerShell cmdlets to control most things in Windows. What’s more, PowerShell cmdlets are sometimes the only programmatic way to control some software. This means at some point you are likely to need to use cmdlets from Python. Until there is a native way of doing this with a Python module, the easiest way is with subprocess, as done previously for shell commands.

Whole PowerShell scripts, not just single commands, can be run with the powershell.exe command. There are three command line options that will be useful: -NoLogo removes the banner at startup; -ExecutionPolicy, if set to Bypass, should run the script regardless of the current execution policy without changing the settings; and -File specifies the script to run.

So just save the command(s) to execute into a temporary file and then call powershell.exe with the above options and the file name of the temp file. One oddity is that powershell.exe requires the file to have a .ps1 extension or it will refuse to run it. You can arrange this by passing suffix=".ps1" into NamedTemporaryFile.

Putting all this together gives the following

import subprocess, tempfile, os

def posh(command):
    commandline = ['powershell.exe', '-NoLogo', '-ExecutionPolicy', 'Bypass', '-File']
    # powershell.exe insists on a .ps1 extension, so name the temp file accordingly
    with tempfile.NamedTemporaryFile("w", suffix=".ps1", delete=False) as f:
        f.write(command)
    commandline.append(f.name)
    try:
        result = subprocess.check_output(commandline)
        exitcode = 0
    except subprocess.CalledProcessError as err:
        # on a non-zero exit code the output is attached to the error object
        result = err.output
        exitcode = err.returncode
    os.unlink(f.name)
    return exitcode, result.decode()

retcode, retval = posh("Write-Host 'Hello Python from PowerShell'\nexit 1")
print("Exit code: %d\nReturned: %s" % (retcode, retval))

Note that to get the output in the case of an error you need to read it from the error object.

SSH tunneling for security

Running your administration scripts and monitoring from a remote server can bring a lot of benefits. I’ve covered simple web servers and threading in previous posts. However, this does create one big issue: securing it. Setting up HTTPS and storing user logins adds a lot of complexity.

However if you are running this on a *nix server, or if you are prepared to install an SSH server on your Windows box, you can get encryption and authentication almost for free with SSH tunneling. This allows traffic to be sent down the encrypted SSH connection to other ports on the server – or even other servers. Authentication with your username and password/certificate comes at the start when you establish the SSH connection.

For example, you could make a simple web server in Python bound to port 8082 on localhost. This would not be accessible to any other computer on the network, only to someone directly connected to the server. Rather than run a text-only browser from the SSH terminal, you could tunnel all traffic going to port 8081 (say) on your machine to port 8082 on the remote machine. Then, while the SSH connection was open, point your browser at http://localhost:8081/ to view the web page on your local machine as if it were served by a regular server.

So how do you configure this? I’ll assume you are using PuTTY. Enter the hostname or IP address of the server as usual. Then expand the Connection and SSH nodes to show the Tunnels page and make the following changes:

  • Leave the Local ports accept connections from other hosts box unticked
  • In Source port, put the local port you want to use. This can be anything you like – in our example above it was port 8081
  • In Destination, put the server and port you want to access from the remote server – in our example above this would be 127.0.0.1:8082 – then click Add

And that is it.

Save this configuration for future use in the normal way and you are good to go.
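
If you have a command line SSH client instead, the equivalent tunnel to the PuTTY configuration above can be opened with the -L option (the username and hostname are placeholders):

ssh -L 8081:127.0.0.1:8082 user@yourserver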

Reading numbers

Often there is a difference between the way your program needs to use a number and the way someone using the program would want to enter it. A good example is file size. Your program probably wants the number of bytes, but the person using your program is going to want something a lot more flexible, like the ability to specify the size in Mb. And not just whole megabytes either – how about allowing 0.5M?

Thankfully this flexibility can be achieved without too much work. First of all we need a reliable way of converting a string into a number. Ideally this should be smart about it, so that if you enter 3 you get an integer and if you enter 2.5 you get a floating point number back. Add in the ability to specify a default value and to turn off floating point numbers and you end up with the following fairly self-explanatory function.

def tonum(string, default=0, allow_float=True):
    num = default
    try:
        num = int(string)
    except ValueError:
        if allow_float:
            try:
                num = float(string)
            except ValueError:
                pass
    return num

Now to put this to good use. The first thing you need to know is whether a valid letter has been included. If so, convert the number part and then multiply it by whatever the letter stands for. Finally, make sure the result is an integer (you cannot have fractions of a byte). If there is no letter then just convert to an integer. This can either be done by setting allow_float=False in the call to tonum if you want only strict integers, or by converting like I have done if you want to allow some more exotic ways of specifying numbers.

# assume the string to convert is in the variable sizestr
multiplier = { 'k': 1024, 'm': 1024*1024, 'g': 1024*1024*1024, 't': 1024*1024*1024*1024 }
if sizestr[-1] in ('k','K','m','M','g','G','t','T'):
    numbytes = int(tonum(sizestr[:-1])*multiplier[sizestr[-1].lower()])
else:
    numbytes = int(tonum(sizestr))

These 16 lines of code handle some really strange looking examples like 1e3k for 1000Kb (1024000 bytes). The code naturally lends itself to handling arguments passed into a program (from getopts or similar), from a GUI or read in from a file.
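
For instance, wrapping the snippet above in a hypothetical to_bytes() helper (the function name is my own) that reuses the tonum() function makes the behaviour easy to check:

def to_bytes(sizestr):
    multiplier = { 'k': 1024, 'm': 1024**2, 'g': 1024**3, 't': 1024**4 }
    if sizestr[-1].lower() in multiplier:
        return int(tonum(sizestr[:-1]) * multiplier[sizestr[-1].lower()])
    return int(tonum(sizestr))

print(to_bytes("1e3k"))  # 1024000
print(to_bytes("0.5M"))  # 524288
print(to_bytes("42"))    # 42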

Hopefully this will give you a start if you need to handle something similar.