Get Blazing Fast Python Performance!
Whether you are a seasoned high-performance developer or a
data scientist looking to speed up your workflows, the
Intel Distribution for Python powered by Anaconda
delivers an easy-to-install, performance-optimized Python
experience to meet even your most demanding requirements.
The all-inclusive, out-of-the-box distribution accelerates
core Python packages including NumPy, SciPy, pandas,
scikit-learn, Jupyter, matplotlib, and mpi4py.
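
To see the effect of the accelerated packages, you can time a
BLAS-heavy NumPy operation in your current environment and again
under the Intel Distribution for Python. The sketch below is a
minimal, illustrative benchmark; the matrix size is an assumption
and no timings are claimed here.

import time
import numpy as np

# Time a matrix multiply, which dispatches to the linked BLAS
# (MKL in the Intel Distribution for Python).
n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

print("{0}x{0} matrix multiply took {1:.2f} s".format(n, elapsed))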
High-Performance Tools for Python
Supercharge all your Python applications on modern Intel
platforms with Intel® Distribution for Python*.
The FREE Intel® Distribution for Python powered by Anaconda
can be used as a drop-in replacement for your current Python
environment to get high performance out of the box. Your
Python applications gain significant performance immediately
and can be tuned further to extract every last bit of
performance using Intel® VTune™ Amplifier.
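
Before and after switching environments, a quick check confirms
which distribution the interpreter and NumPy actually come from.
This is a minimal sketch; the exact strings reported (for example
a distributor tag in the version banner or MKL sections in the
build configuration) depend on the particular build and are
assumptions here.

import sys
import numpy as np

# Intel builds typically identify the distributor in the version banner.
print(sys.version)

# An MKL-backed NumPy lists MKL-related BLAS/LAPACK sections here.
np.show_config()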
Intel® VTune™ Amplifier XE for
performance profiling of Python/C/C++
Available as part of the Intel® Parallel Studio XE 2017
Cluster Edition, this performance analyzer accurately
identifies obvious and non-obvious performance bottlenecks in
your Python code with native extensions, with low overhead
and line-level detail. Its efficient profiling techniques can
help dramatically improve the performance of your Python* code
by detecting time, CPU, and memory bottlenecks.
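
As a rough stand-in for a hotspot analysis, the standard-library
cProfile can locate the dominant functions in a script; VTune
Amplifier XE collects similar data with lower overhead and adds
line-level detail for native extensions. The workload below is a
hypothetical example for illustration, not VTune itself.

import cProfile
import pstats
import numpy as np

def hotspot(n=1500):
    # A deliberately compute-heavy kernel to produce an obvious hotspot.
    a = np.random.rand(n, n)
    return np.linalg.eigvalsh(a @ a.T)

cProfile.run("hotspot()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)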
Learn More in a Free October 4 Webinar
This webinar highlights the significant performance speed-ups
achieved by applying multiple Intel tools and techniques for
high-performance Python to collaborative filtering methods
benchmarked on the latest Intel® platforms. A combination of
performance profiling with Intel® VTune™ Amplifier XE,
accelerated machine learning algorithms in the Intel® Data
Analytics Acceleration Library and Intel® Distribution for
Python*, and enhanced thread scheduling showcases their
individual strengths and combined computational power on
large-scale machine learning workloads.
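
For context, the core of a collaborative filtering workload is
typically a large matrix product, such as the item-item cosine
similarity sketched below. This is only an illustration of the
kind of kernel involved, not the webinar's benchmark code; the
accelerated NumPy in the Intel Distribution for Python speeds up
exactly this kind of operation.

import numpy as np

# Synthetic users-by-items ratings matrix (shapes are illustrative only).
ratings = np.random.rand(10000, 500)

# Item-item cosine similarity: one large matrix product plus normalization.
norms = np.linalg.norm(ratings, axis=0)
similarity = (ratings.T @ ratings) / np.outer(norms, norms)

print(similarity.shape)  # (500, 500)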