How Python Is Becoming Faster
Compared to newer languages like Go, Python is a relatively slow high-level language, so its programs take more time to execute. By the end of this article, you should understand why that is, how you can speed things up, and what the language's core developers are doing to improve runtime speed.
Even though Python is a dynamic, easy-to-learn language with simple syntax, it is relatively slow when compared to languages like Java, Go, C and C++.
A comparison of a popular Python framework (Django) with a Go framework (Gin) shows the Go framework handling far more requests per second (114,963) than Django (8,945); in this benchmark, Go is roughly 13 times faster than Python.
The fundamental reason behind this slow execution of Python code is that Python is a dynamically typed language.
Java, for example, is a statically typed language that runs all necessary type checks and compiles the code before runtime; this lets the compiler optimize the program and make it run faster.
Python, on the other hand, is compiled at run time: because it is dynamically typed, any variable's type or value can change while the program is running. For this reason, Python code cannot be fully compiled beforehand, and so it cannot be optimized ahead of time the way it can be in compiled languages like Java or C.
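A minimal illustration of why ahead-of-time optimization is hard: the same name can be rebound to values of completely different types while the program runs, so the interpreter must keep checking types at run time.

```python
# A name can be rebound to a value of a different type at runtime,
# so the interpreter cannot fix the type of `x` ahead of time.
x = 42
print(type(x).__name__)   # int

x = "hello"               # same name, now a string
print(type(x).__name__)   # str
```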
Another reason Python is slower than languages like Java or C is that Python is an interpreted language, while Java is compiled to bytecode that the JVM further compiles to machine code, and C is compiled directly to machine code.
An interpreted language like Python is translated to machine instructions while it executes; this extra translation work at run time is another reason Python is slow.
Even though Python is slower, it has simple syntax and a large ecosystem of libraries and contributors. This partly explains why it is used in so many fields, including GPU-heavy ones like machine learning and artificial intelligence.
So, is there no way to make Python faster?
There are a couple of ways you can go about making your Python program faster:
Multiprocessing module
Python does not achieve true parallelism through multithreading. Multithreading lets different portions of a program run on separate CPU cores simultaneously, which would make the program faster.
However, Python has a global interpreter lock (GIL) that allows only one thread to execute Python bytecode at a time; because Python is a dynamic language compiled at runtime, letting multiple portions of a program run truly in parallel inside one interpreter would cause consistency issues.
The Python multiprocessing module sidesteps the global interpreter lock by running multiple interpreter processes concurrently, which can make your Python program execute faster. However, you may run into issues with shared memory and locking.
C extensions
Besides the multiprocessing module, writing a C extension for your Python code can significantly improve your program's run time.
The default Python implementation, CPython, is itself written in C.
Because of this, you can write C code as an extension to your Python code.
C is a fast low-level language, so moving performance-critical parts into a C extension will help make your Python program run faster.
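Writing a full C extension means using the Python C API and compiling a module. As a lighter-weight sketch of the same idea, reusing already-compiled C code from Python, the standard library's `ctypes` module can call functions in an existing C library directly (this example assumes a POSIX system where the C math library `libm` can be located):

```python
import ctypes
import ctypes.util

# Locate and load the C math library (POSIX assumption).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature: double sqrt(double), so ctypes
# converts arguments and the return value correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(16.0))  # 4.0
```

The call itself executes compiled machine code; a hand-written C extension goes further by also avoiding the per-call conversion overhead that ctypes incurs.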
But both of these methods have their downsides. Using the multiprocessing module effectively is hard to achieve because of memory sharing and locking, and to write C extensions you need to know the C programming language.
However, all of this is about to change.
The creator of the Python language, Guido van Rossum, unveiled plans for making Python faster at the 2021 virtual Python Language Summit held in May.
Building on the recently released Python 3.10, the plan to speed up Python by up to 2x will begin with Python 3.11.
In the published presentation, van Rossum explained that the Python performance improvement project is handled by a "small team funded by Microsoft" as part of Microsoft's way of giving back to the Python community.
He assured that the team would also take care of maintenance and support, and the project will be open source.
According to the plan, van Rossum indicated that the project of speeding up Python would be incremental, targeting 2x in Python 3.11 and up to 5x in subsequent Python releases.
But will the team be able to achieve this speed?
Van Rossum stated that they are far from certain that they will reach 2x, but they are "optimistic and curious".
He listed the constraints the team must work within, which is why achieving 2x speed in version 3.11 might be hard.
He declared that any changes and improvements to Python must not break stable application binary interface (ABI) compatibility, must not break limited API compatibility, and must not break or slow down extreme cases. He added that the modifications must "keep the code maintainable."
How do they plan on achieving 2x speed in Python 3.11?
Within these constraints, van Rossum and the Python improvement project team identified some aspects they can freely change to reach a 2x speedup in version 3.11.
Since the bytecode compiler and interpreter are components that change in each release anyway, they are great candidates for speed optimization without breaking anything.
He proposed introducing an "adaptive, specializing bytecode interpreter", which uses bytecode instructions specialized for a particular data type to execute that portion of the code. This acts like an inline cache and speeds up execution.
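Specialization happens at the bytecode level. You can inspect the generic bytecode CPython currently generates for a function with the standard `dis` module; a specializing interpreter would swap generic instructions like the binary-add opcode for type-specific variants once it observes that, say, both operands are always integers:

```python
import dis

def add(a, b):
    return a + b

# Print the generic bytecode CPython compiles this function to;
# the addition appears as a single generic binary-add instruction.
dis.dis(add)
```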
Other proposed speed optimizations for Python 3.11 include optimizing the frame stack, improving the speed of function calls, changing the .pyc file format, and implementing more efficient exception handling.
Depending on the speed improvements the team achieves in Python 3.11, van Rossum said that reaching 5x speed may be possible, but "we'll have to be creative."
Is there an efficient way to improve Python's speed before Python 3.11?
Python 3.11, which is expected to ship with these speed improvements, will not be released until 2022. Until then, we need other ways to speed up our Python programs.
To speed up your Python programs, you can use the multiprocessing module or write C code as a Python extension, as explained earlier.
You can also use a JIT compiler such as Numba if you're using NumPy.
Numba is a just-in-time (JIT) compiler that uses decorators to convert Python and NumPy code to machine code. Decorated Python functions are compiled directly to machine code the first time they are called, which improves speed at run time.
Numba translates Python functions to optimized machine code at runtime using the industry-standard LLVM compiler library.
When using Numba, you don't need to replace the Python interpreter, run a separate compilation step, or even have a C/C++ compiler installed. You just need to apply one of the Numba decorators to your Python function, and Numba does the rest.
from numba import jit
import random

@jit(nopython=True)
def monte_carlo_pi(nsamples):
    acc = 0
    for i in range(nsamples):
        x = random.random()
        y = random.random()
        if (x ** 2 + y ** 2) < 1.0:
            acc += 1
    return 4.0 * acc / nsamples
API developers may also use FastAPI, a modern, high-performance web framework for building APIs with Python 3.6+ based on standard Python type hints. TechEmpower benchmarks show FastAPI applications running under Uvicorn to be among the fastest Python frameworks available.
If you are familiar with Flask, learning FastAPI will not take too much of your time:
from typing import Optional

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}
For non-API development, if you want to swap out the popular Django or Flask in some of your time-sensitive workloads, you may consider less-used frameworks like Falcon, Bottle, and apidaora. According to the Web Frameworks Benchmark project, Falcon (3), Bottle (0.12), and apidaora (0.28) are the three fastest Python frameworks for Python 3.9.
The Chief I/O
The team behind this website. We help IT leaders, decision-makers and IT professionals understand topics like Distributed Computing, AIOps & Cloud Native