Posting to Teams

I can’t believe it’s been 4½ years since my Posting to Slack blog post (which I’ve had to update a few links on), so it’s probably time for a similar post for Microsoft Teams. Unsurprisingly, as Teams is Microsoft’s blatant rip-off of Slack, the process is very similar. First create the webhook with these instructions and then post a JSON document as detailed here to the webhook URL obtained in the first step.

The format can look a bit daunting, but the only required field is text (again, just like Slack). This means we can post to the Teams webhook with the same few lines as follows:

import requests
# see your integration config for webhook URL
webhookurl = 'https://outlook.office.com/webhook/...'
payload = {'text': 'This is a test post'}
response = requests.post(webhookurl, json=payload)
print("%d - %s" % (response.status_code, response.reason))

You can find the full documentation on the JSON format here. Note that the following fields are not supported: heroImage, hideOriginalBody, startGroup, originator and correlationId.
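
If you want something richer than plain text, the same webhook will also accept a card with a title, colour and a list of facts. Below is a minimal sketch using the MessageCard fields from that documentation; the webhook URL, title, colour and fact values are just illustrative placeholders:

import requests

webhookurl = 'https://outlook.office.com/webhook/...'
card = {
    "@type": "MessageCard",
    "@context": "https://schema.org/extensions",
    "summary": "Deployment finished",        # shown in notification previews
    "themeColor": "0076D7",                  # accent colour as a hex string
    "title": "Deployment finished",
    "sections": [{
        "activityTitle": "Build 1234",
        "facts": [
            {"name": "Environment", "value": "Production"},
            {"name": "Status", "value": "Succeeded"},
        ],
    }],
}
response = requests.post(webhookurl, json=card)
print("%d - %s" % (response.status_code, response.reason))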

Azure function bindings

I covered the basics of a Python Azure Function before; now I’m going to look at the bindings in function.json as a way to get additional settings or storage. Most of the information can be found in the Python developer guide.

The guide recommends that the Azure functions are kept within their own folder (called __app__ but you can change this) with tests and other files outside of this directory so they are not packaged up with the deployment. This is not the default if using the wizard in VS Code; if you move your code into a sub-directory afterwards you will need to re-initialise in VS Code in order to do local testing. Despite what the guide says, the .gitignore file should remain in the root. If you are doing local testing you should also have the Azure Storage Emulator running.

The majority of function.json is taken up with the bindings array. The bindings link your function to other resources. All bindings will have the following three fields with additional fields determined by the type:

  • name: Name of the binding; this should match the parameter name in your function entry point, apart from $return which binds to the returned output of the function. Note the name cannot contain underscores (unfortunately).
  • type: As a minimum, there will be one binding with the trigger type used to call the function (HTTP, timer, queue etc.). Additional types can also be bound to the function like storage tables.
  • direction: Either in (data is to be passed in to the function) or out (function will write data out to the binding).

Adding an additional binding to a storage table is a useful way to provide the function with configuration. Function apps are already attached to a storage account (connection string is stored in the AzureWebJobsStorage app setting); you can create a table in this storage account and put the configuration in there. Step 7 in the step-by-step guide above touched on storage account bindings – full details on binding a table can be found here.

When adding the binding in function.json, the partitionKey and rowKey are optional. If you specify both, this will point to a unique entity and the JSON string passed in will be an object with all the fields, including PartitionKey and RowKey. If you specify only one of them, the JSON string passed in will be an array of objects that match the given partitionKey or rowKey. If you specify neither, the array will be the entire contents of the table.

Let’s bind a table entity from the configuration table to our function by adding the following object to our bindings array:

{
      "name": "config",
      "type": "table",
      "direction": "in",
      "connection": "AzureWebJobsStorage",
      "tableName": "configuration",
      "partitionKey": "function",
      "rowKey": "myfunc"
}

We can then load this configuration in our Azure function with the following code. Note the example is for an HTTP trigger function.

import json
import azure.functions as func

def main(req: func.HttpRequest, config: str) -> func.HttpResponse:
    # "config" matches the binding name in function.json and arrives as a JSON string
    configuration = json.loads(config)
    return func.HttpResponse("Loaded %d configuration fields" % len(configuration))

That’s it; you now have a configuration dictionary with all the fields from the bound table entity. Also note that because the table is set in function.json, you can have different configuration entities for different functions. It’s a lot neater than having hundreds of app settings.

If using table(s) for configuration, you will need to create them beforehand. You can use Azure Storage Explorer to do this, both to manage the tables in Azure and in the storage emulator when running locally.
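
If you would rather script the setup, the azure-cosmosdb-table module covered in the Azure table storage section below can create the table and seed the entity used by the binding above. A rough sketch, assuming the local storage emulator and the configuration/function/myfunc names from the binding example:

from azure.cosmosdb.table.tableservice import TableService

# local storage emulator; swap in a real connection string for Azure
ts = TableService(connection_string="UseDevelopmentStorage=true")

ts.create_table("configuration")   # does nothing if the table already exists
ts.insert_or_replace_entity("configuration", {
    "PartitionKey": "function",
    "RowKey": "myfunc",
    "webhookurl": "https://outlook.office.com/webhook/...",   # example configuration fields
    "threshold": 5,
})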

Another common type to bind to is a queue (either a storage queue or a service bus queue). If used as a trigger, you can use the function to respond to a message being placed into the queue. Binding to the function output allows you to write messages into a queue. Combining the two allows one function to call another; this is Microsoft’s preferred method of doing this, rather than directly invoking the function with an HTTP request.

Binding a storage queue to the output is covered in step 7 of the tutorial. Creating a new function with an Azure Queue Storage trigger will create the necessary boilerplate code to use. There is not much more to it than that.
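
To give a flavour of the output side, here is a rough sketch of an HTTP-triggered function dropping a message onto a storage queue through an output binding. The binding name msg, the message contents and the response text are my assumptions; the name must match whatever you declare in function.json (type queue, direction out):

import azure.functions as func

def main(req: func.HttpRequest, msg: func.Out[str]) -> func.HttpResponse:
    # "msg" must match the output binding name declared in function.json
    msg.set('{"task": "check-site", "url": "https://example.com"}')
    return func.HttpResponse("Message queued", status_code=202)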

If you get the error message Value 'func.Out' is unsubscriptable when adding the queue to the function parameters, try uninstalling pylint with the following command: pip uninstall pylint – thanks to Stack Overflow as usual for this.

If you are looking for samples of other types of bindings check out the following repo.

Python and virtual environments in VS Code

I’ve covered virtual environments before and how to set up and use them in Visual Studio. With the popularity of VS Code growing it was about time I did a quick roundup of running Python in general, and virtual environments in particular, with VS Code. If you have not used VS Code before with Python here is a quick tutorial.

The first thing is to make sure you have the Microsoft Python extension for Visual Studio Code installed. There are a lot of other extensions you can add to improve productivity but this is the only one I’ll assume is installed. This extension will activate whenever you have a file open with the .py extension.

With a .py file open, you should see in the status bar along the bottom the currently selected interpreter that will be used if you run that script, e.g. Python 3.8.2 64-bit – if you want to select a different interpreter, click on this section of the status bar. All the Python environments VS Code can find will be listed, along with the option of entering a path for an interpreter it cannot find.

There is no menu to add or create a virtual environment in VS Code like there is in Visual Studio; instead the creation must be done manually. This can either be done using pipenv or, assuming you are using Python 3, from the command line with the following:

python -m venv .venv
code .

The first line creates a folder called .venv containing all the necessary files and structure. The second line starts VS Code (assuming you have it in your path). Note the dot at the end; this opens the current directory as the workspace. Now when you open a file ending in .py, as well as activating all the installed extensions linked to Python, it also looks for any virtual environments directly off the working folder. You should see the status bar change to show the Python environment is now the version from the virtual environment folder, e.g. Python 3.8.2 64-bit (‘.venv’: venv) – running the Python script will automatically run it inside the virtual environment without any additional steps.

Not only that, if you open a new terminal it will automatically run the activation script to enable the virtual environment. So to install (or add more) packages into the virtual environment you can just open a terminal and type in:

pip install -r requirements.txt

That’s all there is to it. While not seamless, with a couple of commands you can be running everything in virtual environments like a pro. If you want some more extensions, check out this list.

Azure table storage

Storage accounts in Azure can be used for storing four types of information:

  • Blob storage for data blobs using a REST interface (probably the most common use)
  • File shares for file access via SMB (with caveats)
  • Table storage for storing unstructured JSON documents (Cosmos is a better choice if you need database type functionality)
  • Queue for creating message queues (although Service Bus is probably a better choice)

Using table storage is straightforward with Python; finding the documentation to do this in Python is less so, as search results point to a lot of out-of-date articles. So here is a quick rundown and links.

The module you need is not azure-storage (which is now deprecated – that would be too obvious). Instead you should pip install azure-cosmosdb-table. Once installed, you use the TableService constructor to connect to the storage account and query or perform CRUD operations on the table.

A row in table storage is referred to as an entity. You can have any fields you like, but each entity must have a PartitionKey and a RowKey. Combined, the two form a unique key to access the entity. If you are creating or updating an entity, your dictionary must contain these two fields. Hopefully the following example should help:

from azure.cosmosdb.table.tableservice import TableService

# connect to the local Azure Storage Emulator
ts = TableService(connection_string="UseDevelopmentStorage=true")

tables = [t.name for t in ts.list_tables()]
if "monty" not in tables:
    ts.create_table("monty")
    # every entity needs a PartitionKey and RowKey; any other fields are up to you
    entity1 = {"PartitionKey": "Countries", "RowKey": "Britain", "Ruler": "King", "HowToBecome": "Strange women lying in ponds distributing swords"}
    entity2 = {"PartitionKey": "Countries", "RowKey": "Rome", "Ruler": "Emperor", "Benefit": "Better sanitation, medicine, education, wine, public order, irrigation, roads, fresh water system and public health"}
    ts.insert_entity("monty", entity1)
    ts.insert_or_merge_entity("monty", entity2)

# queries return entities with attribute access to their fields
for entity in ts.query_entities("monty"):
    if entity.Ruler == "King":
        print(entity.RowKey)

# tidy up the table if this run created it
if 'entity1' in locals():
    ts.delete_table("monty")

In the above example, I connected to the Azure Storage Emulator (running locally). Replace the connection string with one from Azure to connect to a storage account.
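
You can also filter a query or fetch a single entity directly by its keys. A couple of hedged examples below, assuming the monty table from the snippet above still exists (comment out the delete_table call to try them):

from azure.cosmosdb.table.tableservice import TableService

ts = TableService(connection_string="UseDevelopmentStorage=true")

# OData-style filter on the partition key
for entity in ts.query_entities("monty", filter="PartitionKey eq 'Countries'"):
    print(entity.RowKey, entity.Ruler)

# fetch one entity directly using its PartitionKey and RowKey
britain = ts.get_entity("monty", "Countries", "Britain")
print(britain.HowToBecome)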

Azure functions with Python

There has been a lot of hype over the last few years about serverless computing, an oxymoron as the code is definitely running on servers – you just stop caring as you don’t maintain them. Azure functions have finally matured enough to allow you to write Python functions (at least with Python 3.6 to 3.8) without much effort. If you want some background information on Azure functions and where they fit, check out this blog post.

It helps to have a walk-through introduction, and Microsoft handily provides a step-by-step guide here. This assumes you have VS Code and the necessary modules installed, as it uses these to publish the function up to Azure. VS Code also allows you to run the function locally during development.

Your function app will consist of one or more functions, each of which is a REST endpoint. Each endpoint is organised as a module – in its own folder named the same as the endpoint, which executes the main function inside __init__.py by default. Also inside the folder is a function.json file which contains all the settings for the function. Notice one of these is scriptFile, which allows you to change the name of the Python file should you wish. Also by convention there is a readme.md file describing the function and a sample.dat containing a sample of the data passed to the function if it accepts POST, PUT or PATCH requests.

Notice that the Azure function is running inside a virtual environment when run locally. I’ve covered virtual environments before, but in short you can enter the virtual environment from the command line with .venv\Scripts\activate (.bat for the command prompt and .ps1 for PowerShell). Do not use this to add modules (that should be done using requirements.txt as normal), but it is a good way to test a bit of code.

You are likely to want to pass a few settings into the function. The easiest way for settings shared across all the functions is through app settings. Like web apps, these settings are passed in as environment variables, so they can be read via os.environ; if you want to see all the environment variables (which include any app settings) try changing the output text of HttpExample to:
", ".join(os.environ.keys())

When running locally, you can add app settings to the Values object in the local.settings.json file (in the root). Oddly, when running on the Azure servers, the setting is passed in twice, once with APPSETTING_ prefixed to the name (key) and again without the prefix. Remember that when running under Windows, the Python library forces environment variable names to uppercase, but on Linux and other POSIX systems the environment variables are case sensitive.
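
One way to smooth over these differences is a small helper that tries the prefixed and upper-cased variants before falling back to a default. This is only an illustrative sketch; the helper and setting names are mine, not part of the functions runtime:

import os

def get_setting(name, default=None):
    # try the plain name, the APPSETTING_-prefixed variant Azure adds,
    # and their upper-cased forms (Windows upper-cases environment variables)
    for key in (name, "APPSETTING_" + name, name.upper(), ("APPSETTING_" + name).upper()):
        if key in os.environ:
            return os.environ[key]
    return default

webhook_url = get_setting("TeamsWebhookUrl", "")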

Using app settings for secrets like passwords or API keys is not a great idea. For better security you can store the value in a key vault and put a reference to it in the app setting instead. See this post for details.

The above should be enough information to get a running function in Azure written in Python. I am looking at writing a website monitoring suite of functions (similar to Pingdom or StatusCake, but using the requests module to interact with the website and ensure it is working correctly, not just checking that a page loads) so no doubt there will be other posts on this soon.
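
To give a flavour of what one of those checks might look like (the names and URL here are entirely hypothetical), the idea is to verify the content of the response as well as the status code:

import requests

def check_site(url, expected_text, timeout=10.0):
    # the site "works" if the page loads, returns 200 and contains a known string
    try:
        response = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False
    return response.status_code == 200 and expected_text in response.text

print(check_site("https://example.com", "Example Domain"))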

Local Kubernetes

If you’ve used containers either for work or for a hobby you will no doubt have come across Kubernetes. It makes orchestration of hundreds and even thousands of containers on a cluster of machines possible. However, if you are just starting out and want to run a local test, you don’t want to be setting up a multi-machine cluster at this point.

Another option would be a Kubernetes service from Azure, AWS or Google. Easy to set up, but you may not want to pay for hosted services at this point. Or you want to retain the ability to test locally before checking the code in, where a CI/CD pipeline will put the container(s) into your running cloud service.

The typical answer in the past has been to set up minikube. However, if you are running Windows Pro and have Hyper-V configured, you may prefer installing multipass and running microk8s from Ubuntu instead.

You can download a Windows installer for multipass from the above link and there is a how to set up Kubernetes on Windows guide which covers getting started. Any VMs started by multipass can be viewed in Hyper-V Manager as normal.

One thing I didn’t like was the way it stored the VMs in C:\Windows\System32\config\systemprofile\AppData\Roaming by default, with no option I could find to change it. I keep VM images on a different drive for space reasons, not my SSD C: drive. Thankfully Windows has been able to create links with mklink since Vista.

If, like me, you want to move the images to a separate drive, open up C:\Windows\System32\config (you may need to grant yourself access to this folder) then the systemprofile folder (again you may need to grant yourself access). Continue to AppData\Roaming\multipassd.

Move (cut and paste) the vault folder to another drive; I will use I: as an example. Then open up a command prompt as administrator (mklink is not available in PowerShell) and enter the following two commands:

cd C:\Windows\System32\config\systemprofile\AppData\Roaming\multipassd
mklink /j vault I:\vault

That’s it; images moved to another drive with more space and you are ready to use multipass for running Ubuntu images on Windows.

If you have installed Docker Desktop for Windows (and I recommend you do) then this will install a version of kubectl for interacting with your newly built cluster. You can confirm it is installed and in your path by typing kubectl version.

While you should have got a version for the client, you will probably get an error saying it cannot talk to the server. That’s because the server is not running on localhost but in a separate VM. If you type kubectl config view you should see that no clusters are configured.

The instructions on getting the configuration (and status) on the Ubuntu page are incorrect; microk8s is an executable, not a set of files. To get a configuration file locally on your machine, enter the command below (it’s a single command that should be entered on one line):

multipass exec microk8s-vm -- sudo /snap/bin/microk8s config > microk8sconfig

You can now verify kubectl can access the server with the following command, which should now return both the client and the server version:

kubectl --kubeconfig=microk8sconfig version

Web hooks from GitHub repo

Following on from an earlier post on the GitHub API, I had the requirement to add web hooks to a group of repos so that all pull requests raised, completed or closed would be posted to the web hook. All the repos have a common prefix and the PyGithub library provides a doc on how to create a web hook, so I just need to put the two together.

As I want this to be re-runnable, I first need to check there is not already a web hook. For this I created the function has_webhook, which iterates through the existing hooks, using the get_hooks function, looking to see if any reference the passed-in URL.

With that done, I just need to add a little bit of additional code to my githubrepos.py template. First, an option to the parameters so I can specify the web hook. Next, I need a hook config dictionary for creating any new hooks and a list of events. As this is the same for each repo, I’ll do this at the start and, for the sake of simplicity, hardcode the list. Finally, in my repo loop I need to call the has_webhook function and, for any repos which do not already have the web hook, create it using the create_hook function.
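
Roughly, the pieces fit together as sketched below; the access token, hook URL, repo prefix and event list are placeholders for the real parameters, and the final webhooks.py linked next is the authoritative version:

from github import Github

HOOK_URL = "https://example.com/endpoint"    # placeholder for the web hook URL parameter
EVENTS = ["pull_request"]                    # hardcoded list of events

def has_webhook(repo, url):
    # True if any existing hook on the repo already points at the given URL
    return any(hook.config.get("url") == url for hook in repo.get_hooks())

g = Github("my-access-token")                # placeholder personal access token
for repo in g.get_user().get_repos():
    if not repo.name.startswith("myprefix-"):    # placeholder common prefix
        continue
    if not has_webhook(repo, HOOK_URL):
        repo.create_hook(
            name="web",                      # "web" is the value GitHub expects for webhooks
            config={"url": HOOK_URL, "content_type": "json"},
            events=EVENTS,
            active=True,
        )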

This gives me the final webhooks.py.

As a final note, if you are looking to integrate GitHub into Slack or MS Teams, both offer apps which largely automate the process. Ultimately what they are doing is setting up a webhook in the same way as above.