Hi Guys!
Today, I’ll show another way one can drastically improve the performance of Python code. Last time, we took advantage of vectorized computing by using GPU-based computation. This time, we’ll explore PyPy, an alternative Python implementation with a just-in-time (JIT) compiler, whereas standard CPython is a plain interpreter.
What is PyPy?
According to the standard description available on the web:
PyPy is a very compliant Python interpreter that is a worthy alternative to CPython. By installing and running your application with it, you can gain noticeable speed improvements. How much of an improvement you’ll see depends on the application you’re running.
What is a JIT (Just-In-Time) compiler?
A compiled programming language is generally faster in execution, as it generates machine code for the target CPU architecture & OS. However, compiled binaries are challenging to port to another system. Examples: C, C++, etc.
Interpreted languages are easy to port to a new system. However, they lack performance. Examples: Perl, MATLAB, etc.
Python falls between the two: it compiles to bytecode first, so it performs better than purely interpreted languages. But, indeed, not as good as a compiler-driven language.
A Just-In-Time compiler takes advantage of both worlds. It identifies the repeatedly executed code & converts those chunks into machine code for optimum performance.
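To make this concrete, here is a minimal sketch of the kind of tight, repeatable numeric loop a tracing JIT like PyPy’s identifies as “hot” and compiles down to machine code (the function name and loop bound are my own illustration, not from the post):

```python
from timeit import default_timer as timer

def hot_loop(n):
    # A tight, repeatable numeric loop -- exactly the kind of
    # chunk a tracing JIT detects and compiles to machine code.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = timer()
result = hot_loop(1_000_000)
print('Sum of squares below 1,000,000:', result)
print('Took', timer() - start, 'seconds')
```

Run the same file under CPython and PyPy and compare the printed durations; under CPython every iteration is interpreted, while PyPy compiles the loop body after it warms up.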
To prepare the environment on a Mac (I’m using a MacBook), you need to install the following –
brew install pypy3
Let’s revisit our code.
Step 1: largeCompute.py (The main script, which will run under both interpreters for the performance comparison):
##############################################
#### Written By: SATYAKI DE               ####
#### Written On: 06-May-2021              ####
####                                      ####
#### Objective: Main calling script for   ####
#### normal execution.                    ####
##############################################
from timeit import default_timer as timer

def vecCompute(sizeNum):
    try:
        total = 0
        for i in range(1, sizeNum):
            for j in range(1, sizeNum):
                total += i + j
        return total
    except Exception as e:
        print('Error: ', str(e))
        return 0

def main():
    start = timer()
    totalM = vecCompute(100000)
    print('The result is : ' + str(totalM))
    duration = timer() - start
    print('It took ' + str(duration) + ' seconds to compute')

if __name__ == '__main__':
    main()
Key snippets from the above script –
for i in range(1, sizeNum):
    for j in range(1, sizeNum):
        total += i + j
The vecCompute function iterates roughly 100000 * 100000 times (or as many as any newly supplied number dictates), accumulating the value (total += i + j) on each iteration.
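As a quick sanity check on the loop above, the double sum actually has a closed form: for n = sizeNum, the total equals n * (n - 1)^2. A small sketch verifying this (the helper names here are mine, not from the post):

```python
def vec_compute(size_num):
    # Same double loop as in largeCompute.py.
    total = 0
    for i in range(1, size_num):
        for j in range(1, size_num):
            total += i + j
    return total

def closed_form(size_num):
    # sum_{i=1}^{n-1} sum_{j=1}^{n-1} (i + j) = n * (n - 1)^2
    return size_num * (size_num - 1) ** 2

for n in (3, 10, 500):
    assert vec_compute(n) == closed_form(n)
print('Closed form matches the loop for n = 3, 10, 500')
```

The closed form is handy for confirming both interpreters produce the same answer, since the benchmark only differs in speed, not in the result.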
Let’s see how it performs.
To run the script with PyPy, you need to use the following command –
pypy largeCompute.py
or, you have to mention the specific path as follows –
/Users/satyaki_de/Desktop/pypy3.7-v7.3.4-osx64/bin/pypy largeCompute.py
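When running the same script under both interpreters back to back, it helps to label each timing with the interpreter that produced it. A small hedged sketch using the standard-library `platform` module (not part of the original script):

```python
import platform
import sys

# Identify which interpreter is executing this script, so the
# timing output can be labelled correctly in a comparison run.
impl = platform.python_implementation()   # 'CPython' or 'PyPy'
print('Running under', impl, sys.version.split()[0])
```

Dropping these lines into largeCompute.py makes it obvious which of the two timings in the output came from PyPy.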

As you can see, there is a significant performance improvement, i.e. (352.079 / 14.503) ≈ 24.276. So, I can clearly say PyPy runs this script about 24 times faster than the standard Python interpreter. This is approaching the speed of equivalent C++ code.
Where not to use it?
PyPy works best with pure Python-driven applications. It doesn’t play well with C extensions for Python; code that relies on them won’t get the same benefits, and some extensions may not work at all. However, I strongly believe that one day we may use this for most of our use cases.
For more information, please visit this link. So, this is another short yet effective post. 🙂
So, finally, we have done it.
I’ll bring some more exciting topics in the coming days from the Python verse.
Till then, Happy Avenging! 😀
Note: All the data & scenarios posted here are representational, available over the internet, & for educational purposes only.