Scripts Deep Dive
In this section we’ll take a more in-depth look at scripts and how they can be used most effectively for CloudShell orchestration.
How CloudShell handles scripts
CloudShell executes a Python script in a straightforward way: it simply runs the script with a Python executable. To send information to the script, CloudShell sets environment variables in the scope of the script process. These environment variables include information about the sandbox reservation, as well as the script parameters. The script’s standard output is returned as the command result. If an exception is raised, or if the script process returns a non-zero exit code, the execution is considered a failure.
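For illustration, here is a minimal sketch of such a script. The environment variable name used below is an assumption for illustration purposes; the exact variables CloudShell sets may differ between versions.

```python
import os
import sys

# CloudShell passes sandbox details and script inputs as environment
# variables. The variable name below is an assumption for illustration;
# each script parameter is also exposed as its own environment variable.
reservation_context = os.environ.get('RESERVATIONCONTEXT', '')

# Anything written to standard output becomes the command result
print('Reservation context: {0}'.format(reservation_context))

# Raising an exception or exiting with a non-zero code marks the run as failed
sys.exit(0)
```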
Using a main function and packaging multiple files
As scripts become more complex, it is advisable to create a main function and separate the rest of the logic into different functions, rather than structuring the script as one big function. Python requires some boilerplate code in addition to the main function to make this work. Here is some example code demonstrating how to use main functions with scripts:
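This is a minimal sketch; the helper function here is a placeholder for your own orchestration logic.

```python
import os

def print_environment_keys():
    # Example helper separated from main: lists the environment
    # variables CloudShell set for this script process
    for key in sorted(os.environ):
        print(key)

def main():
    print_environment_keys()

# Standard Python boilerplate: run main() only when executed as a script
if __name__ == '__main__':
    main()
```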
As you write more complex orchestration scripts, it may become prudent to split the code across multiple files. To do that, we can take advantage of Python’s ability to execute .zip archives containing multiple files. The only requirement is that one of the files is named __main__.py, which is how the entry point of the Python process is determined.
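For example, a packaged script might look like the following sketch (all file names other than __main__.py are hypothetical):

```python
# Archive layout:
#   my_script.zip
#   +-- __main__.py      <- required entry point
#   +-- sandbox_logic.py <- additional module packaged in the same zip
#
# Contents of __main__.py:
from sandbox_logic import run_orchestration

if __name__ == '__main__':
    run_orchestration()
```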
Referencing other packages
Unlike Shell drivers, CloudShell doesn’t look for a requirements.txt file for scripts and doesn’t attempt to retrieve dependencies from PyPI. The script dependencies must be installed on the Python environment used by the Execution Server.
On Windows machines, the ES will by default use the Python included with the ES installation, which can be found in the \Python\2.7.10 directory under the ES installation folder. On Linux, the Execution Server will use the default Python configured in the OS. In both cases, however, it is possible to specify a different Python environment. To do so, add the following key to the Execution Server’s customer.config file:
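A sketch of the entry, assuming the ScriptRunnerExecutablePath key name; verify the exact key and path against the documentation for your CloudShell version.

```xml
<appSettings>
  <!-- Key name and path are assumptions; adjust for your installation -->
  <add key="ScriptRunnerExecutablePath" value="C:\Python27\python.exe" />
</appSettings>
```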
To install a dependency, run the following using the Python executable referenced by the ES:
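For example, on a Windows machine using the bundled Python (the installation path below is illustrative):

```bash
"C:\Program Files (x86)\QualiSystems\CloudShell\Execution Server\Python\2.7.10\python.exe" -m pip install <package-name>
```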
As dependencies can get complex, it is recommended to keep a centralized requirements.txt file in which you catalog the requirements of all of the orchestration scripts, adding new entries as needed. This makes it easier to keep track of the dependencies used by the orchestration scripts, avoids version conflicts, and simplifies deploying new Execution Servers. Instead of installing each dependency independently, you’ll then be able to run:
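For example, invoked with the Python executable used by the ES:

```bash
python -m pip install -r requirements.txt
```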
Setup and teardown scripts
Setup and teardown are special types of orchestration scripts. There are two things that make them special:
- They can’t have any inputs, since they are launched automatically.
- If you use the default ‘Python Setup & Teardown’ driver, simply including a setup or teardown script in the reservation and setting a duration for the setup/teardown is enough for CloudShell to launch it.
To set a script as a setup or teardown script, you need to edit it from the script management page. One of the fields allows you to select the Script Type; by choosing ‘Setup/Teardown’, the script takes on that special behavior. Notice that you won’t be able to run it separately from the environment’s built-in setup and teardown commands, and you won’t be able to add any inputs to it.
Debugging scripts
CloudShell includes some helper functions that make it easier to debug a script by running it against real sandbox reservation data. The helper functions allow the script to “attach” to a CloudShell sandbox by filling in all of the script’s environment variables, so that it has the same information available as it would if CloudShell had launched it.
To attach to a CloudShell sandbox, first create a sandbox reservation, then add the following code and fill in the required data for the function parameters.
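A sketch using the attach_to_cloudshell_as helper from the CloudShell script helpers; all credential, server, and sandbox values below are placeholders.

```python
from cloudshell.helpers.scripts import cloudshell_dev_helpers as dev_helpers

# All values below are placeholders - fill in your own server address,
# credentials, domain and sandbox reservation id
dev_helpers.attach_to_cloudshell_as(user='admin',
                                    password='admin',
                                    domain='Global',
                                    reservation_id='5487c6ce-d0b3-43e9-8ee7-e27af8406905',
                                    server_address='localhost')
```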
If we include the above code in the example script we provided earlier, we’ll be able to run it locally as well as from the CloudShell sandbox. The attach_to_cloudshell_as function will populate all of the blueprint data just as CloudShell would, so from the code’s perspective it doesn’t make a difference where it’s being run from. Furthermore, the special attach_to_cloudshell_as call is ignored when the script runs from CloudShell, so there is no adverse effect to leaving the statement in.
One drawback of this strategy is that it’s probably not a good idea to leave your CloudShell credentials in the code itself in plain sight. That is why we recommend using a similar function which takes the same information from a file. Make sure to add that file to your .gitignore list so that it doesn’t end up in source control, of course. The following code has the same effect as the lines above, except that it looks for the information in a JSON file named quali_config.json, which should be in the project root.
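A sketch of the file-based variant, using the attach_to_cloudshell helper from the same module:

```python
from cloudshell.helpers.scripts import cloudshell_dev_helpers as dev_helpers

# Reads the connection details from quali_config.json in the project root
dev_helpers.attach_to_cloudshell()
```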
The quali_config.json should have the following structure:
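The values shown here are placeholders; fill in your own details.

```json
{
  "user": "admin",
  "password": "admin",
  "domain": "Global",
  "server_address": "localhost",
  "reservation_id": "5487c6ce-d0b3-43e9-8ee7-e27af8406905"
}
```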