It's been roughly a decade since I first became romantically involved with Python. This particular romantic comedy somehow missed the "love at first sight" trope entirely - I found the process of learning Python infuriating. My apprehensions weren't with the language itself, but rather with every living software developer on the face of the earth.
From my standpoint, it seemed like people in "software" consisted of only two archetypes, neither of which was particularly savory. One end of the spectrum was occupied by my fellow noobs, who seemed happily constrained to programming in cutesy REPL environments provided by whichever bootcamp handed them the blue pill that kept them from asking questions. On the opposite end of the spectrum was everybody else: competent engineers who may as well have been geniuses in my eyes. I knew these people had the knowledge to answer every question I could muster, yet they somehow proved utterly incapable of being useful in any capacity. It wasn't clear to me at the time whether seasoned engineers were intentionally elitist assholes, or if newcomers were genuinely helpless. The only clarity I had was that neither demographic was going to provide value toward my immediate goals. In retrospect, that frustration is likely what propelled this blog into existence.
I was convinced there was nothing worth building unless it was running on a Linux server, behind a real domain, accessible to the world. That might sound like a reasonable viewpoint in 2020, but this was 2010: there was no Docker, no Heroku, no DigitalOcean. Nginx was mostly a fringe webserver created by "some dude in Russia," powering 10% of sites on the internet compared to Apache's 90%. Web-based Python had previously relied on an Apache module called mod_python. Mod_python was suddenly (and arbitrarily?) deemed "dead" in favor of an undocumented Apache module by the name of mod_wsgi. It was the brainchild of Graham Dumpleton, who may as well have been the only person in the world besides myself attempting to run a god damn Python web app. It's a wonder I ever managed to succeed in doing that at all.
The acronym "WSGI" stands for Web Server Gateway Interface, which is an esoteric way of saying "how a webserver communicates with Python" (it's formally specified in PEP 3333). uWSGI and its predecessors are a form of middleware that sits between webservers like Nginx and the Python apps they serve.
There are plenty of options for serving Python web apps in 2020, but uWSGI is objectively better than alternatives like Gunicorn. If you have any doubts about why we're rolling with uWSGI, these cherry-picked charts I stole from this guy's blog will surely clear things up:
Before we get started, I'm going to be real with you: this stuff can feel obnoxiously esoteric at times. It's tempting to look at this process and say "fuck it" when there are one-click solutions to avoid all of this altogether, but I'll let you in on a secret: those solutions suck. Heroku is a shitty AWS reseller that sells trash EC2 instances with a friendlier interface. Docker is a cop-out for developers to dodge the nuances of Linux, to the point where we'd rather install an entire VM's worth of overhead than learn. I'm prepared to be shat on for those remarks, but my point remains: if you're interested in rising above vendor lock-in and building apps that run faster, you've come to the right place.
Getting Set Up
We need to install a bunch of Python dev packages on Ubuntu (or whatever) for uWSGI to work. Even if this line looks familiar, do not skip this part (like I did). There's almost certainly at least one package below you're missing:
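Something along these lines; the exact package list is my assumption for Ubuntu 18.04, so adjust for your release:

```shell
# Build tooling and Python headers that uWSGI compiles against
sudo apt update
sudo apt install build-essential python3-dev python3-pip python3-setuptools
```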
Installing uwsgi-plugin-python3 is an important step that deserves some extra attention. uWSGI is traditionally a Python package, so you might expect us to run pip install uWSGI at some point. On the contrary: if we were to install uWSGI via pip, uWSGI would be a Python package belonging to whichever system-default version of Python 3 happens to be installed on our machine. Our project is likely going to use a version of Python other than Python 3.6.9 (the Ubuntu 18.04 default). Thus we need a version of uWSGI that transcends Python versions. This is where uwsgi-plugin-python3 comes in:
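Assuming Ubuntu's package names, that means installing uWSGI system-wide along with its plugin system:

```shell
# System-level uWSGI plus the plugin machinery we'll lean on shortly
sudo apt install uwsgi uwsgi-plugin-python3
```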
Our last bit of Ubuntu configuration is to open port 5000:
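Assuming ufw is your firewall (my assumption; skip or translate if you use something else):

```shell
# Allow inbound traffic on port 5000 so we can test uWSGI directly
sudo ufw allow 5000
```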
Prep Your Project
Next we just need to make sure your Flask app is on your remote server and ready for action. Clone your project onto your VPS and make sure your project has a proper wsgi.py file to serve as the app's entry point:
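Here's a minimal, self-contained sketch of a wsgi.py (real projects usually import the app object from a package instead of defining it inline; the route and message are placeholders):

```python
# wsgi.py — uWSGI will import this module and look for a callable named "app"
from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "Hello from uWSGI!"


if __name__ == "__main__":
    # Allows `python wsgi.py` as a quick local sanity check
    app.run(host="0.0.0.0", port=5000)
```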
You already know to use virtual environments without me telling you, but we need to specifically use virtualenv here - NOT Pipenv or any other alternative. uWSGI is picky about this for some reason. Save yourself the trouble and use virtualenv, even though it kinda sucks on principle:
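The virtualenv dance, assuming your dependencies live in requirements.txt:

```shell
# Install virtualenv itself if you don't have it yet
pip3 install virtualenv

# Create and activate an environment, then install your project's deps
virtualenv venv
source venv/bin/activate
pip install -r requirements.txt
```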
We've installed all dependencies, installed uWSGI, and our project is looking good... we should be ready to test this thing out, right?
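Here's the sort of first attempt you'd make (the module target wsgi:app assumes the entry point file above is named wsgi.py with a Flask instance called app):

```shell
# Serve the app over HTTP on port 5000 using the generic python3 plugin
uwsgi --http-socket :5000 --plugin python3 --module wsgi:app
```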
You may be able to discern what's happening above: the output of uWSGI states we're using Python 3.6.9 (not what we want) and can't find the packages associated with our activated virtual environment. When we specified --plugin python3 in the line before, we were too general: we need a uWSGI plugin built specifically for our version, which is called python38.
Install uWSGI Python 3.8 Plugin
Thanks to the uwsgi-plugin-python3 library we installed earlier, installing version-specific uWSGI plugins is easy:
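Treat the exact invocation below as an assumption (it's uWSGI's plugin-build mechanism as I remember it; your Python binary name may differ):

```shell
# Build a Python 3.8-specific uWSGI plugin from the plugin sources
PYTHON=python3.8 uwsgi --build-plugin "plugins/python python38"
```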
You should see output like this:
This downloads a file called python38_plugin.so to your current folder. We need to move this to where it belongs, and set some permissions:
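The destination below is Ubuntu's default uWSGI plugin directory (an assumption; check your distro):

```shell
# Move the plugin where uWSGI looks for plugins, and make it readable
sudo mv python38_plugin.so /usr/lib/uwsgi/plugins/python38_plugin.so
sudo chmod 644 /usr/lib/uwsgi/plugins/python38_plugin.so
```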
Let's Try That Again
This time around we're going to specify --plugin python38 to run uWSGI specifically with Python 3.8. We're also going to add another flag called --virtualenv, which defines the path at which our Python libraries are installed. Kill the previous uWSGI process and give it another go:
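Something along these lines (the venv path is a placeholder for wherever your virtual environment actually lives):

```shell
# Serve with the version-specific plugin and point at our virtualenv
uwsgi --http-socket :5000 \
      --plugin python38 \
      --virtualenv /path/to/your/venv \
      --module wsgi:app
```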
Here we go...
This is good news! Visit your server's IP address at port 5000 and report back. Is it working? IT'S WORKING! NICE!!!
Something awesome about uWSGI is how easy it is to utilize multiple cores in our machine by specifying how many threads and processes we want to use. If your machine is equipped with multiple CPU cores, here's how easy it is to utilize them:
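A sketch with four processes of two threads each (tune the numbers to your machine's core count; the other flags carry over from before):

```shell
# Spread the app across 4 worker processes, each running 2 threads
uwsgi --http-socket :5000 \
      --plugin python38 \
      --virtualenv /path/to/your/venv \
      --module wsgi:app \
      --processes 4 \
      --threads 2
```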
This will output each worker created as well as the cores utilized:
Keep in mind that uWSGI processes don't get killed by simply Control+C-ing in the terminal. Remember to kill unwanted uWSGI processes by using pkill -9 uwsgi.
Running uWSGI via Config File
We've proven that we can serve our app via the uWSGI CLI, but we want our app to persist forever behind a registered domain name. We need a way for Nginx to hook into our uWSGI process with all the flags we passed via the CLI (such as the uWSGI plugin to use, our virtual environment location, etc.). Luckily, we can save the flags/values we passed into the CLI to an .ini file with the same naming convention:
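Here's what myapp.ini might look like, carrying over the flags we've been passing on the CLI (paths and the module name are assumptions based on the examples above):

```ini
[uwsgi]
module = wsgi:app
plugin = python38
virtualenv = /path/to/your/venv
socket = myapp.sock
chmod-socket = 664
processes = 4
threads = 2
die-on-term = true
```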
Instead of specifying http-socket here, we set socket to myapp.sock. Nginx is going to handle our HTTP requests, but it needs a way to associate incoming requests with our running application. We handle this by creating a socket: every time we run uWSGI with this configuration, a file called myapp.sock is created in our project directory. All Nginx needs to worry about is pointing to this socket for incoming traffic.
Now we can run our app with the proper configuration efficiently:
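With the config in place, the whole CLI incantation collapses to:

```shell
uwsgi myapp.ini
```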
That's much better. As an added bonus, the presence of die-on-term = true in our config means that our uWSGI process will end when we Control+C, for convenience's sake.
uWSGI & Nginx 4 Eva
Assuming you have Nginx installed, create a config for our app in sites-available:
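For example (the filename myapp is an assumption; use whatever editor you like):

```shell
sudo nano /etc/nginx/sites-available/myapp
```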
This might be one of the simplest Nginx configs you'll ever have to create. Listen for your domain on port 80 and forward this traffic (with parameters) to the location of the socket file we specified in myapp.ini:
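A minimal sketch, assuming your project lives at /path/to/your/project and your domain is yourdomain.com (both placeholders):

```nginx
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    location / {
        # Hand every request to the uWSGI socket our app creates
        include uwsgi_params;
        uwsgi_pass unix:///path/to/your/project/myapp.sock;
    }
}
```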
Note the triple slashes in the uwsgi_pass line: unix:// is the scheme, and the third slash begins the absolute path to our socket file.
Let's symlink this config to sites-enabled:
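Assuming the config file was named myapp:

```shell
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/myapp
```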
Restart Nginx for the changes to take effect:
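```shell
sudo service nginx restart
```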
Running Your App
You're one command away from deploying your app to the world forever. Before you pull the trigger, take pride in the fact that you've made it here. The entire cloud computing industry has profited massively from the paradigm of "serverless" microservices because most people are not like you. Most people would rather pay fees for shared resources at the expense of performance to dodge reading a tutorial like this one. You are liberated. You are beautiful. You are Batman.
nohup is a unix command which allows processes to persist in the background until they're explicitly killed (or your machine turns off). Running nohup uwsgi myapp.ini & (note the trailing ampersand) will spin up a uWSGI process that stays alive while you go about your business.
nohup is a quick and dirty way to get your app running "forever," as long as your definition of "forever" doesn't account for fatal events that would kill your Python app, such as an unhandled error or a power outage. To ensure your app is truly immortal, I highly recommend creating a systemd service.
Create a Service
Systemd is a Linux "service manager" used to configure and run process daemons, which is a cool term for "processes that run in the background." Remember when we did service nginx restart like 30 seconds ago? nginx is an example of a daemon process: a constant process listening for incoming connections. We're going to make myapp a service too, so Nginx always has something to direct traffic to.
The syntax for systemd service configurations follows the .ini file format:
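A sketch of the unit file (saved as, say, /etc/systemd/system/myapp.service; every path here is a placeholder you'll need to swap for your own):

```ini
[Unit]
Description=uWSGI instance to serve myapp
After=network.target

[Service]
User=root
WorkingDirectory=/path/to/your/project
Environment="PATH=/path/to/your/venv/bin"
ExecStart=/usr/bin/uwsgi myapp.ini
Restart=on-failure
RestartSec=10s

[Install]
WantedBy=multi-user.target
```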
User: Tells our service to run our app as the Ubuntu root user.
WorkingDirectory: The working directory that we'll be serving our app from.
Environment: Our project's Python virtual environment.
ExecStart: This is the most important part of our configuration: a shell command to actually start our application. Each time we "start" or "restart" our service, this is the command being executed.
Restart & RestartSec: These two values work together to keep our app alive. If the app happens to go down, our service will restart it (hence the on-failure value for Restart) after waiting 10 seconds (RestartSec).
Now give it a go:
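Assuming the service file was named myapp.service:

```shell
sudo systemctl start myapp
sudo systemctl enable myapp  # optional: also launch the service on boot
```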
start will silently execute your newly created service. To see if everything succeeded, check the service's status:
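```shell
sudo systemctl status myapp
```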
Manage uWSGI Apps in Emperor Mode
An alternative to creating a service per application is using uWSGI's "Emperor" mode. Emperor mode enables us to manage all uWSGI apps globally, similar to the way Nginx manages hosts. This is probably overkill for most people, but if you're weird, stick around.
Change directories to /etc/uwsgi and check out what's inside:
- /apps-available: A global folder to hold all your uwsgi .ini files (like the one we created earlier). This is the uWSGI equivalent of Nginx's sites-available folder.
- /apps-enabled: Just like Nginx, this folder expects symbolic links from its apps-available counterpart. Running uWSGI in emperor mode will look for all config files in this folder and run them accordingly.
Let's run our app in emperor mode! Copy your config to apps-available and symlink it to apps-enabled:
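Assuming myapp.ini lives in your project directory:

```shell
# Stage the config globally, then enable it via symlink (Nginx-style)
sudo cp /path/to/your/project/myapp.ini /etc/uwsgi/apps-available/myapp.ini
sudo ln -s /etc/uwsgi/apps-available/myapp.ini /etc/uwsgi/apps-enabled/myapp.ini
```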
Now give it a whirl:
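Point uWSGI's emperor at the enabled-apps folder, and it will spin up a process for every config file it finds there:

```shell
uwsgi --emperor /etc/uwsgi/apps-enabled
```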
Start on Machine Start-up
The best part about running uWSGI in emperor mode is we can have our apps launch on machine startup without writing any services. Create a file called /etc/rc.local and include the following:
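A minimal rc.local sketch (remember to make it executable with chmod +x /etc/rc.local; the uwsgi path is an assumption):

```shell
#!/bin/sh -e
# Launch the uWSGI emperor at boot, backgrounded so boot can continue
/usr/bin/uwsgi --emperor /etc/uwsgi/apps-enabled &
exit 0
```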
That's all, folks!
PS: Having run through this, setting up uWSGI suddenly doesn't seem so convoluted after all. It all seems obvious now, but in reality, it took me multiple failed starts over the course of a year to get uWSGI working. Try reading any other uWSGI tutorial and you'll quickly see why: engineers are still apparently god awful at explaining these concepts. I don't make these generalizations lightly, but hey, more traffic for me.
Rant over, tutorial over. Until next time.