Rust missing component

I have started to look at the Rust programming language, both for integrating with Python and for programming Windows and the Raspberry Pi. The first problem I hit was the following error when trying to use a Rust VS Code extension.

toolchain 'stable-x86_64-pc-windows-msvc' does not contain component 'rls' for target 'x86_64-pc-windows-msvc'

This was one of the situations where searching on the error message did not seem to help. The problem was actually a broken upgrade. I had previously had Rust installed but never got started with it. Either the initial installation was broken or the upgrade did not work correctly.

The fix, as is often the case in these situations, was to completely uninstall Rust with the following command.

rustup self uninstall

Then to reinstall from scratch.
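
After reinstalling from https://rustup.rs, the missing component can also be added explicitly if the extension still complains. A hedge: rls was the language server component of the era, and has since been superseded by rust-analyzer, so newer setups may want that instead.

rustup component add rls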

Routing table

I recently created an Azure SQL Managed Instance as a proof of concept for a project I was working on. Creating it through the portal also created a routing table with 31 routes to the Internet, which initially confused me.

On reflection, suspecting a simple answer, I set about working out what the routes covered, and I've shared the results below.

Combined route                 | Individual CIDR routes
0.0.0.0 to 9.255.255.255       | 0.0.0.0/5 + 8.0.0.0/7
11.0.0.0 to 172.15.255.255     | 11.0.0.0/8 + 12.0.0.0/6 + 16.0.0.0/4 + 32.0.0.0/3 + 64.0.0.0/2 + 128.0.0.0/3 + 160.0.0.0/5 + 168.0.0.0/6 + 172.0.0.0/12
172.32.0.0 to 192.167.255.255  | 172.32.0.0/11 + 172.64.0.0/10 + 172.128.0.0/9 + 173.0.0.0/8 + 174.0.0.0/7 + 176.0.0.0/4 + 192.0.0.0/9 + 192.128.0.0/11 + 192.160.0.0/13
192.169.0.0 to 255.255.255.255 | 192.169.0.0/16 + 192.170.0.0/15 + 192.172.0.0/14 + 192.176.0.0/12 + 192.192.0.0/10 + 193.0.0.0/8 + 194.0.0.0/7 + 196.0.0.0/6 + 200.0.0.0/5 + 208.0.0.0/4 + 224.0.0.0/3

Yes, this basically routes everything apart from the private IP address ranges (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) to the Internet.
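
If you want to check this for yourself, Python's ipaddress module can do the arithmetic. This is just a sanity-check sketch I would write, not anything Azure provides: merging the 31 routes with the three private ranges should collapse to the entire IPv4 address space.

import ipaddress

# The 31 CIDR routes from the table above
routes = [
    "0.0.0.0/5", "8.0.0.0/7",
    "11.0.0.0/8", "12.0.0.0/6", "16.0.0.0/4", "32.0.0.0/3", "64.0.0.0/2",
    "128.0.0.0/3", "160.0.0.0/5", "168.0.0.0/6", "172.0.0.0/12",
    "172.32.0.0/11", "172.64.0.0/10", "172.128.0.0/9", "173.0.0.0/8",
    "174.0.0.0/7", "176.0.0.0/4", "192.0.0.0/9", "192.128.0.0/11",
    "192.160.0.0/13",
    "192.169.0.0/16", "192.170.0.0/15", "192.172.0.0/14", "192.176.0.0/12",
    "192.192.0.0/10", "193.0.0.0/8", "194.0.0.0/7", "196.0.0.0/6",
    "200.0.0.0/5", "208.0.0.0/4", "224.0.0.0/3",
]
# The gaps are exactly the RFC 1918 private ranges
private = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

nets = [ipaddress.ip_network(n) for n in routes + private]
# collapse_addresses merges adjacent and overlapping networks;
# routes plus private ranges should cover every IPv4 address
print(list(ipaddress.collapse_addresses(nets)))  # [IPv4Network('0.0.0.0/0')]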

GitHub API

In the old days of centralised version control systems, the number of repositories tended to be small, as creating a new one usually involved convincing the admin to provide (usually expensive database) space for the repo. In these decentralised days, where anyone can create or clone a repo and the rise of automation is usually underpinned by a repo, the number of repos an admin is responsible for has ballooned to tens or sometimes even hundreds.

At this scale, manually configuring repos is time consuming and error prone. Thankfully GitHub provides an API for management, and there is a Python wrapper around the API called PyGithub. I've had a few tasks to do in GitHub recently, so I've come up with a few automation scripts.

From the documentation, you first need to authenticate using either a username and password or, preferably, an access token. There is also a variation if you have an enterprise account with its own domain name; as I haven't, I've ignored this option, but I wanted to write the scripts in such a way as to make adding this easy.
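
For reference, the basic PyGithub authentication looks something like the below (the token, credentials and enterprise URL are placeholders):

from github import Github

# Preferred: authenticate with a personal access token
g = Github("my_access_token")

# Alternatively, username and password
g = Github("myuser", "mypassword")

# Enterprise accounts with their own domain also pass a base_url, e.g.
# g = Github(base_url="https://github.example.com/api/v3",
#            login_or_token="my_access_token")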

Once authenticated, you are most likely to want to limit the repos to those within your organisation using the get_organization method. That is already four combinations (two authentication methods times two scopes, ignoring enterprise accounts) just to list the repos. As I intend to have several scripts, it makes sense to standardise this with the following three functions:

githuboptparse
Creates an optparse (parameter) parser that reads in all the standard options. Returns the parser so additional options can be added.

githublogin
Authenticates with GitHub. The different methods are provided by githuboptparse, so the options need to be passed into this function.

get_filtered_repos
Returns all the repos in the organisation (if an organisation was specified at the command line), otherwise returns all the repos the user has access to. This is set by githuboptparse, so the options need to be passed into this function. You also need to be authenticated, so the return value of githublogin needs to be passed in too. Returns an iterator, just like get_repos() does.

In order to share these between the scripts I created a _utils.py file. With the standard options now abstracted, the boilerplate code to list all the repos reduces to just six lines: import _utils, create the parser with githuboptparse, parse the options, authenticate with githublogin, iterate through the repos with get_filtered_repos and print each repo name.
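
As a sketch, those six lines might look like the below (the exact signatures of the helper functions are assumptions based on the descriptions above):

import _utils

parser = _utils.githuboptparse()         # parser with the standard options
(options, args) = parser.parse_args()    # read the command line
gh = _utils.githublogin(options)         # authenticate with GitHub
for repo in _utils.get_filtered_repos(gh, options):
    print(repo.name)                     # name comes from PyGithub's Repository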

Most organisations will have a naming convention, so it is likely you are going to want to filter the repos further based on some criteria. This will involve modifying what the iterator returns, which sounds tricky but is in fact fairly easy to achieve, as this contrived example shows

names = ('title','first','middle','last','suffixes')
for n in names: # default iterator returns all elements
    print(n)

def no_suffixes ( iteratee ):
    # ensure no suffixes are returned by iterator
    for i in iteratee:
        if i != 'suffixes':
            yield i

for n in no_suffixes(names):
    print(n) # look, suffixes has gone

Using this principle I added the ability to include only repos whose names match a given regex and to exclude repos that match a given regex. I added the options to the githuboptparse function and then filtered the iterator returned from get_filtered_repos in the same way as above. Use githubrepos.py to test out this filtering.
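
The filtering generator itself would look roughly like this (a sketch; the parameter names are invented for illustration):

import re

def filter_repos(repos, include=None, exclude=None):
    # Yield only repos whose names match the include regex (if given)
    # and do not match the exclude regex (if given)
    for repo in repos:
        if include and not re.search(include, repo.name):
            continue
        if exclude and re.search(exclude, repo.name):
            continue
        yield repo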

Role-based Azure Certification

Over the summer, Microsoft announced they would be retiring some of their existing Azure exams (70-532/3/5) and replacing them with role-orientated exams, which test the functionality of the resources typically used by a role rather than all of the functionality those resources have to offer. For more information see this post on Build Azure.

Apart from announcing this just after I started studying for the old exams, there is too little information at present to know if this is a good step forwards or just a silly rebranding exercise like renaming VSTS to Azure DevOps. So I've just started studying for the Administrator certification; I'll let you know how I get on.

The natural end goal will be the DevOps Certification when it is released.

Diff release definition environments in VSTS

Recently I had to update a release definition in VSTS and found that the tasks for releasing to the various environments were slightly different, often in options which were collapsed by default. This made manual checking both difficult and error prone. Assuming this would be a common problem, I checked for a tool to do the comparison but could not find one.

A release definition is just a JSON file, and if you export it you see the environment settings (tasks, variables etc.) are stored in one big array called, naturally enough, environments. So you could just manually extract the ones you want and use a diff tool.

I thought I could do better. The VSTS API allows you to get the release definition, and Python is so much better at automating the manipulation of JSON documents. Needless to say it wasn't quite as easy as it sounded, but eventually I got my VSTS diff script working. I've uploaded it to GitHub in case anyone has a similar problem and could find the script useful.
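
For anyone who wants to roll their own, fetching a definition is only a few lines with requests. A minimal sketch, assuming a personal access token and the 2018-era vsrm endpoint (the account, project, definition id and api-version are placeholders):

import requests

account, project, definition_id = "myaccount", "myproject", 1
url = ("https://%s.vsrm.visualstudio.com/%s/_apis/release/definitions/%d"
       % (account, project, definition_id))
# A personal access token is passed as the password in basic auth
resp = requests.get(url, auth=("", "my_pat_token"),
                    params={"api-version": "4.1-preview"})
definition = resp.json()

# Each environment's tasks and variables sit in the environments array
for env in definition["environments"]:
    print(env["name"])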

Azure resource group deployments

Azure comes with some strange quotas. One of them is a limit on Resource Group Deployments. If you go to a resource group, you can see the number of succeeded deployments. The maximum allowed is 800. Once you reach it, you will see an error message similar to the following:

Creating the deployment 'foobar' would exceed the quota of '800'. The current deployment count is '800', please delete some deployments before creating a new one.

What is also strange is that there is no way to fix this from the portal. Instead you have to rely on either the CLI or PowerShell. The following PowerShell script is a quick fix that removes all deployments older than two months. Just replace myRG with your resource group name.

$keepafter = (Get-Date).AddMonths(-2)
Get-AzureRmResourceGroupDeployment -ResourceGroupName myRG |
    Where {$_.Timestamp -lt $keepafter} |
    Remove-AzureRmResourceGroupDeployment

Be warned this can take a long time (as in all day long)! Change the first line to change the max age.

Dates and time

Python has a reasonably good standard library module for handling dates and times, but it can be a little confusing to a beginner, probably because the first code they encounter will look something like the below with very little explanation.

import datetime
print("Running on %s" % (datetime.date.today()))
myDate = datetime.datetime(2018,6,18,16,13,0)

Why is it datetime.datetime? There is a simple explanation, but one I've rarely seen included.

All of Python's classes for handling dates and times are in the module called datetime (naturally enough). This module contains a class for dates with no time element (datetime.date), a class for times (datetime.time) and a class for when you need both, called unsurprisingly (but a little unfortunately) datetime.datetime, hence the code above.

It also contains two more classes: datetime.timedelta, which is the interval between two dates / datetimes (the result of subtracting one datetime from another), and tzinfo, short for time zone info, which is used to handle timezones in the time and datetime classes.

To add to the confusion, if you want to get the date / time / datetime as of now, there is no standard across the three: datetime uses the now() method, date uses the today() method and time does not have one! You have to use datetime and get the time part as below

import datetime
# Get the date and time as of now as a datetime
print(datetime.datetime.now())
# Get the date as of now (today)
print(datetime.date.today())
# Get the time as of now - have to use datetime!
print(datetime.datetime.now().time())

The confusion does not end there. If you want to format the date / time / datetime in a particular way, you can use the strftime() method – probably short for string format time. The same method exists in all three classes. Why it is called time and not date or something more generic is beyond me; datetime.date.strftime() makes little sense.

If you are reading in strings and need them parsed into a date / time / datetime, there is the strptime() method – probably short for string parse time – but this only exists in the datetime class. So you have to use a similar trick to the one above: create a datetime and extract just the date or time part.
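
For example (the string and format here are just an illustration):

import datetime

# strptime only exists on datetime, so parse the full string first...
parsed = datetime.datetime.strptime("2018-06-18 16:13", "%Y-%m-%d %H:%M")
# ...then extract just the part you need
print(parsed.date())  # 2018-06-18
print(parsed.time())  # 16:13:00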

Once you get past the quirks above, you should find the datetime module straightforward to use. However, if you do find yourself needing a library with more power, try the dateutil library. It can be installed with the usual pip install python-dateutil command.

PowerShell parameters

There is no question that PowerShell is a big improvement over the old DOS shell scripting. If you want to do anything more complex than piping commands from one cmdlet to another, you are probably going to end up using functions or passing parameters into your scripts. While you can add parameters to a function in the usual way, by including them in parentheses after the function name, if you want your function or script to behave like a cmdlet you are going to have to learn a new way – advanced functions.

The standard way to work with parameters is to include a param() call. This does little more than move the parameters from the parentheses to further down the script, but it does make splitting parameters over multiple lines more elegant – which you are probably going to do.

PowerShell automatically uses the name of the variable as the parameter identifier. So param($Query) will allow you to call the function with -Query "To seek the Holy Grail".

The advanced features are activated by including [Parameter(…)] before the parameter name. The Parameter attribute accepts multiple advanced features separated by commas. For example, if you want to make sure the parameter is supplied, add Mandatory = $true.

After the Parameter attribute, you can optionally add validation in the format [type(condition)]. A complete list can be found in the link above. For example, to ensure you don't get an empty parameter, include [ValidateNotNullOrEmpty()] before the parameter name.

There are also common parameters that most cmdlets accept. Rather than enter these each time, just include the attribute [CmdletBinding()] before the param() call. The function will now automatically accept the following common parameters:

  • Verbose
  • Debug
  • ErrorAction
  • WarningAction
  • ErrorVariable
  • WarningVariable
  • OutVariable
  • OutBuffer
  • PipelineVariable

These common parameters also propagate down through cmdlets and other functions that your function calls. So for instance, if you pass -Verbose into your function and your function uses Invoke-RestMethod, this cmdlet will be called with -Verbose automatically and you will see the details of the HTTP request and response.

You can also add help to your function or script by including a formatted <# #> comment block directly after your function definition, or at the very start of your script, similar to a docstring in Python. For details about how to format the comment block see this Microsoft blog post.

A working example is always better than dry text and I’ll be uploading an example soon.
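
In the meantime, here is a minimal sketch pulling the pieces above together (the function name and parameter are invented for illustration):

function Get-Answer {
    <#
    .SYNOPSIS
    Returns an answer to a query. This comment block feeds Get-Help.
    #>
    [CmdletBinding()]
    param(
        # Must be supplied and must not be empty
        [Parameter(Mandatory = $true)]
        [ValidateNotNullOrEmpty()]
        [string]$Query
    )
    Write-Verbose "Answering: $Query"  # only shown when -Verbose is passed
    return 42
}

Get-Answer -Query "To seek the Holy Grail" -Verbose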

pip + virtualenv = pipenv

I have long argued that one of the reasons Node took off so quickly was the inclusion of npm for package management. For all its faults, it allows anyone to quickly get up and working with a project and to build powerful applications by utilising other libraries. What’s more, by being local first, it avoids some of the dependency problems caused by different applications requiring different versions of the same library (at the expense of disk space and a little RAM).

Python did not initially have a package manager, but pip has evolved into the de facto standard and is now included with the installer. All packages are installed globally on the machine; this makes sense given Python's history but is not ideal. To have local packages just for your app, you needed virtualenv or a similar tool.

The obvious next step to close the gap with npm would be a single tool that would set up a local environment and install the modules into it. And that is exactly what pipenv does. It was created by Kenneth Reitz (the author of the requests module, which I've used in several posts) and has quickly gained popularity in the last year.

Lacey has done a good write-up of the history that led to pipenv in this blog post, and there is a full guide available here, but it is just as simple to show you with an example. First install pipenv with pip install pipenv

Then you can create a project, complete with a virtualenv, and install the requests module with the following

mkdir pipenvproject
cd pipenvproject
pipenv install requests

That's it (although personally I would have liked to see a pipenv init command). To prove there is a virtual environment there, use the shell option to switch to it (no more remembering the path to the batch file), then try the following

pipenv shell
pip list
exit
pip list

The first pip list should just show requests and its dependencies. After exiting out of the virtual environment shell, the second pip list will list all of the packages installed on your system.

Log Parser GUI

Log Parser is an old but still incredibly useful utility which I covered way back in this blog post. If you are fighting log files then I still recommend you give the post a read.

Since that post, v2 of a GUI for Log Parser has been released. For those who are more accustomed to using SSMS or similar to write queries, this may be more to your taste. It can be downloaded from here. See this Microsoft blog post for a summary of what has been added in v2.

There is already a decent tutorial from Lizard Labs on using the GUI, but it is not very clear about where the options are, so refer to the image below if you struggle to get started.

[Image: Log Parser GUI]

A little aside for Windows 8 / Server 2012 and above when accessing the event log files: don't try to open the event logs directory (%SystemRoot%\system32\winevt\logs by default) directly. You will probably be unable to open it, because the folder does not have All Application Packages in its security permissions.

There is no need to do this. Log Parser already knows how to access event logs; just use the event log name – Application, Security or System – as shown in the tutorial and example above.
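
For example, a query against the System event log looks something like this from the command line (the field names come from Log Parser's EVT input format; the exact SELECT is just an illustration):

LogParser -i:EVT "SELECT TOP 10 TimeGenerated, SourceName, EventID FROM System ORDER BY TimeGenerated DESC"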