There are plenty of tutorials that cover the basic measures you need to take to secure a Linux server, including my own. What usually falls outside the scope of these tutorials is what other steps are recommended for a server that is going to be accessed by multiple people, such as a group of developers all working together as a team. Group access to a server introduces some challenges, as you will need to implement procedures to grant and revoke access as team members come and go, and do so without compromising security.
This is the eighteenth installment of the Flask Mega-Tutorial series, in which I'm going to deploy Microblog to the Heroku cloud platform.
This is the seventeenth installment of the Flask Mega-Tutorial series, in which I'm going to deploy Microblog to a Linux server.
If you are like me, you were starting to get comfortable with the idea of deploying your applications to cloud instances such as EC2s on AWS or droplets on DigitalOcean, when people started to shift away from cloud instances and embrace containers. Maybe now you are getting into containers and Docker, and as is to be expected, the tech world is making a move once again, this time to serverless computing. Impossible to ever catch up, right?
In this article I'm going to tell you what a serverless architecture can offer you that the more traditional approaches cannot (and more specifically how it is possible to run your Python web applications without a server!). At the time I'm writing this, AWS has by far the most mature serverless platform, with the Lambda, API Gateway and DynamoDB triad of services at the forefront, so this is the platform I'm going to concentrate on.
Highly distributed applications that consist of lots of small services talking among themselves are getting more and more popular, and that, in my opinion, is a good thing. But this architectural style brings with it a new class of problems that are less common in monolithic applications. Consider what happens when a service needs to send a request to another service, and this second service happens to be temporarily offline, or too busy to respond. If one little service goes offline at the wrong time, that can create a domino effect that can, potentially, take your entire application down.
In this article I'm going to show you techniques that can give your application some degree of tolerance for failures in dependent services. The basic concept is simple: we assume that in most cases these failures are transient, so when an operation fails, we simply repeat it a few times until it hopefully succeeds. Sounds easy, right? But as with most things, the devil is in the details, so keep reading if you want to learn how to implement a robust retry strategy.
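To make the idea concrete, here is a minimal sketch of the transient-failure retry pattern described above, with exponential backoff and a bit of random jitter so that many clients don't all retry at the same instant. The `retry` function and its parameters are illustrative names I chose for this sketch, not something taken from a library.

```python
import random
import time


def retry(operation, attempts=3, base_delay=0.5):
    """Run operation(), retrying on failure with exponential backoff.

    Assumes failures are transient: each retry waits twice as long as
    the previous one, plus a little random jitter to avoid having many
    clients retry in lockstep.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts, give up and propagate the error
            # backoff schedule: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

A real implementation would typically retry only on specific exception types (timeouts, connection errors) rather than on every `Exception`, since retrying a permanent failure such as a bad request just wastes time.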
Most of you know by now that not too long ago I joined Rackspace. As you can imagine, I am now learning tons of new things as I familiarize myself with all the OpenStack projects, none of which I have used before.
In this article I'm going to show you a few ways to work more efficiently with your Rackspace cloud account (or any OpenStack cloud for that matter). I will begin with the introduction of a command line tool that you can use to manage your cloud servers, and then go even lower level and show you how you can do the same thing using a Python SDK. To end this article I'm going to show you a complete script that creates a cloud server, configures it as a Web server and deploys a Flask application to it, all completely unattended.