The PyO3 user guide
Welcome to the PyO3 user guide! This book is a companion to PyO3's API docs. It contains examples and documentation to explain all of PyO3's use cases in detail.
Please choose from the chapters on the left to jump to individual topics, or continue below to start with PyO3's README.
PyO3
Rust bindings for Python, including tools for creating native Python extension modules. Running and interacting with Python code from a Rust binary is also supported.
Usage
PyO3 supports the following software versions:
- Python 3.7 and up (CPython and PyPy)
- Rust 1.48 and up
You can use PyO3 to write a native Python module in Rust, or to embed Python in a Rust binary. The following sections explain each of these in turn.
Using Rust from Python
PyO3 can be used to generate a native Python module. The easiest way to try this out for the first time is to use `maturin`. `maturin` is a tool for building and publishing Rust-based Python packages with minimal configuration. The following steps install `maturin`, use it to generate and build a new Python package, and then launch Python to import and execute a function from the package.
First, follow the commands below to create a new directory containing a new Python `virtualenv`, and install `maturin` into the virtualenv using Python's package manager, `pip`:
# (replace string_sum with the desired package name)
$ mkdir string_sum
$ cd string_sum
$ python -m venv .env
$ source .env/bin/activate
$ pip install maturin
Still inside this `string_sum` directory, now run `maturin init`. This will generate the new package source. When given the choice of bindings to use, select pyo3 bindings:
$ maturin init
✔ 🤷 What kind of bindings to use? · pyo3
✨ Done! New project created string_sum
The most important files generated by this command are `Cargo.toml` and `lib.rs`, which will look roughly like the following:
Cargo.toml
[package]
name = "string_sum"
version = "0.1.0"
edition = "2018"
[lib]
# The name of the native library. This is the name which will be used in Python to import the
# library (i.e. `import string_sum`). If you change this, you must also change the name of the
# `#[pymodule]` in `src/lib.rs`.
name = "string_sum"
# "cdylib" is necessary to produce a shared library for Python to import from.
#
# Downstream Rust code (including code in `bin/`, `examples/`, and `tests/`) will not be able
# to `use string_sum;` unless the "rlib" or "lib" crate type is also included, e.g.:
# crate-type = ["cdylib", "rlib"]
crate-type = ["cdylib"]
[dependencies]
pyo3 = { version = "0.17.3", features = ["extension-module"] }
src/lib.rs
use pyo3::prelude::*;

/// Formats the sum of two numbers as string.
#[pyfunction]
fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
    Ok((a + b).to_string())
}

/// A Python module implemented in Rust. The name of this function must match
/// the `lib.name` setting in the `Cargo.toml`, else Python will not be able to
/// import the module.
#[pymodule]
fn string_sum(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
    Ok(())
}
Finally, run `maturin develop`. This will build the package and install it into the Python virtualenv previously created and activated. The package is then ready to be used from `python`:
$ maturin develop
# lots of progress output as maturin runs the compilation...
$ python
>>> import string_sum
>>> string_sum.sum_as_string(5, 20)
'25'
To make changes to the package, just edit the Rust source code and then re-run `maturin develop` to recompile.
To run this all as a single copy-and-paste, use the bash script below (replace `string_sum` in the first command with the desired package name):
mkdir string_sum && cd "$_"
python -m venv .env
source .env/bin/activate
pip install maturin
maturin init --bindings pyo3
maturin develop
If you want to be able to run `cargo test` or use this project in a Cargo workspace and are running into linker issues, there are some workarounds in the FAQ.
As well as with `maturin`, it is possible to build using `setuptools-rust` or manually. Both offer more flexibility than `maturin` but require more configuration to get started.
Using Python from Rust
To embed Python into a Rust binary, you need to ensure that your Python installation contains a shared library. The following steps demonstrate how to ensure this (for Ubuntu), and then give some example code which runs an embedded Python interpreter.
To install the Python shared library on Ubuntu:
sudo apt install python3-dev
Start a new project with `cargo new` and add `pyo3` to the `Cargo.toml` like this:
[dependencies.pyo3]
version = "0.17.3"
features = ["auto-initialize"]
Example program displaying the value of `sys.version` and the current user name:
use pyo3::prelude::*;
use pyo3::types::IntoPyDict;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        let sys = py.import("sys")?;
        let version: String = sys.getattr("version")?.extract()?;

        let locals = [("os", py.import("os")?)].into_py_dict(py);
        let code = "os.getenv('USER') or os.getenv('USERNAME') or 'Unknown'";
        let user: String = py.eval(code, None, Some(&locals))?.extract()?;

        println!("Hello {}, I'm Python {}", user, version);
        Ok(())
    })
}
The guide has a section with lots of examples about this topic.
Tools and libraries
- maturin Build and publish crates with pyo3, rust-cpython or cffi bindings as well as rust binaries as python packages
- setuptools-rust Setuptools plugin for Rust support.
- pyo3-built Simple macro to expose metadata obtained with the `built` crate as a `PyDict`
- rust-numpy Rust binding of NumPy C-API
- dict-derive Derive FromPyObject to automatically transform Python dicts into Rust structs
- pyo3-log Bridge from Rust to Python logging
- pythonize Serde serializer for converting Rust objects to JSON-compatible Python objects
- pyo3-asyncio Utilities for working with Python's Asyncio library and async functions
- rustimport Directly import Rust files or crates from Python, without manual compilation step. Provides pyo3 integration by default and generates pyo3 binding code automatically.
Examples
- hyperjson A hyper-fast Python module for reading/writing JSON data using Rust's serde-json
- html-py-ever Using html5ever through kuchiki to speed up html parsing and css-selecting.
- point-process High level API for pointprocesses as a Python library
- autopy A simple, cross-platform GUI automation library for Python and Rust.
- Contains an example of building wheels on TravisCI and appveyor using cibuildwheel
- orjson Fast Python JSON library
- inline-python Inline Python code directly in your Rust code
- Rogue-Gym Customizable rogue-like game for AI experiments
- Contains an example of building wheels on Azure Pipelines
- fastuuid Python bindings to Rust's UUID library
- wasmer-python Python library to run WebAssembly binaries
- mocpy Astronomical Python library offering data structures for describing any arbitrary coverage regions on the unit sphere
- tokenizers Python bindings to the Hugging Face tokenizers (NLP) written in Rust
- pyre Fast Python HTTP server written in Rust
- jsonschema-rs Fast JSON Schema validation library
- css-inline CSS inlining for Python implemented in Rust
- cryptography Python cryptography library with some functionality in Rust
- polaroid Hyper Fast and safe image manipulation library for Python written in Rust
- ormsgpack Fast Python msgpack library
- bed-reader Read and write the PLINK BED format, simply and efficiently
- Shows Rayon/ndarray::parallel (including capturing errors, controlling thread num), Python types to Rust generics, Github Actions
- pyheck Fast case conversion library, built by wrapping heck
- Quite easy to follow as there's not much code.
- polars Fast multi-threaded DataFrame library in Rust | Python | Node.js
- rust-python-coverage Example PyO3 project with automated test coverage for Rust and Python
- forust A lightweight gradient boosted decision tree library written in Rust.
- ril-py A performant and high-level image processing library for Python written in Rust
- fastbloom A fast bloom filter | counting bloom filter implemented by Rust for Rust and Python!
- river Online machine learning in python, the computationally heavy statistics algorithms are implemented in Rust
- feos Lightning fast thermodynamic modeling in Rust with fully developed Python interface
Articles and other media
- Nine Rules for Writing Python Extensions in Rust - Dec 31, 2021
- Calling Rust from Python using PyO3 - Nov 18, 2021
- davidhewitt's 2021 talk at Rust Manchester meetup - Aug 19, 2021
- Incrementally porting a small Python project to Rust - Apr 29, 2021
- Vortexa - Integrating Rust into Python - Apr 12, 2021
- Writing and publishing a Python module in Rust - Aug 2, 2020
Contributing
Everyone is welcome to contribute to PyO3! There are many ways to support the project, such as:
- help PyO3 users with issues on GitHub and Gitter
- improve documentation
- write features and bugfixes
- publish blogs and examples of how to use PyO3
Our contributing notes and architecture guide have more resources if you wish to volunteer time for PyO3 and are looking for a place to start.
If you don't have time to contribute yourself but still wish to support the project's future success, some of our maintainers have GitHub sponsorship pages:
License
PyO3 is licensed under the Apache-2.0 license. Python is licensed under the Python License.
Installation
To get started using PyO3 you will need three things: a Rust toolchain, a Python environment, and a way to build. We'll cover each of these below.
Rust
First, make sure you have Rust installed on your system. If you haven't already done so, you can do so by following the instructions here. PyO3 runs on both the `stable` and `nightly` versions, so you can choose whichever one fits you best. The minimum required Rust version is 1.48.
If you can run `rustc --version` and the version is high enough, you're good to go!
Python
To use PyO3 you need at least Python 3.7. While you can simply use the default Python version on your system, it is recommended to use a virtual environment.
Virtualenvs
While you can use any virtualenv manager you like, we recommend the use of `pyenv`, especially if you want to develop or test for multiple different Python versions, so that is what the examples in this book will use. The installation instructions for `pyenv` can be found here.
Note that when using `pyenv`, you should also set the following environment variable:
PYTHON_CONFIGURE_OPTS="--enable-shared"
Building
There are a number of build and Python package management systems, such as `setuptools-rust`, and you can also build manually. We recommend the use of `maturin`, which you can install here. It is developed to work with PyO3 and provides the most "batteries included" experience. `maturin` is just a Python package, so you can add it in any way that you install Python packages.
System Python:
pip install maturin --user
pipx:
pipx install maturin
pyenv:
pyenv activate pyo3
pip install maturin
poetry:
poetry add -D maturin
After installation, you can run `maturin --version` to check that you have correctly installed it.
Starting a new project
Firstly you should create the folder and virtual environment that are going to contain your new project. Here we will use the recommended `pyenv`:
mkdir pyo3-example
cd pyo3-example
pyenv virtualenv pyo3
pyenv local pyo3
After this, you should install your build manager. In this example, we will use `maturin`. After you've activated your virtualenv, add `maturin` to it:
pip install maturin
After this, you can initialise the new project:
maturin init
If `maturin` is already installed, you can create a new project using it directly:
maturin new -b pyo3 pyo3-example
cd pyo3-example
pyenv virtualenv pyo3
pyenv local pyo3
Adding to an existing project
Sadly, `maturin` cannot currently be run in existing projects, so if you want to use Python in an existing project you basically have two options:
- create a new project as above and move your existing code into that project, or
- manually edit your project configuration as necessary.
If you are opting for the second option, here are the things you need to pay attention to:
Cargo.toml
Make sure that the Rust code you want to be able to access from Python is compiled into a library. You can have a binary output as well, but the code you want to access from Python has to be in the library. Also, make sure that the crate type is `cdylib` and add PyO3 as a dependency as so:
[package]
# Name of the package. If you already have a package defined in `Cargo.toml`, you can remove
# this section.
name = "pyo3_start"
version = "0.1.0"
edition = "2021"
[lib]
# The name of the native library. This is the name which will be used in Python to import the
# library (i.e. `import pyo3_example`). If you change this, you must also change the name of the
# `#[pymodule]` in `src/lib.rs`.
name = "pyo3_example"
# "cdylib" is necessary to produce a shared library for Python to import from.
crate-type = ["cdylib"]
[dependencies]
pyo3 = { version = "0.17.3", features = ["extension-module"] }
pyproject.toml
You should also create a `pyproject.toml` with the following contents:
[build-system]
requires = ["maturin>=0.13,<0.14"]
build-backend = "maturin"
[project]
name = "pyo3_example"
requires-python = ">=3.7"
classifiers = [
"Programming Language :: Rust",
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
]
Running code
After this you can set up the Rust code to be available in Python as below; for example, you can place this code in `src/lib.rs`:
use pyo3::prelude::*;

/// Formats the sum of two numbers as string.
#[pyfunction]
fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
    Ok((a + b).to_string())
}

/// A Python module implemented in Rust. The name of this function must match
/// the `lib.name` setting in the `Cargo.toml`, else Python will not be able to
/// import the module.
#[pymodule]
fn pyo3_example(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
    Ok(())
}
After this you can run `maturin develop` to prepare the Python package, after which you can use it like so:
$ maturin develop
# lots of progress output as maturin runs the compilation...
$ python
>>> import pyo3_example
>>> pyo3_example.sum_as_string(5, 20)
'25'
For more instructions on how to use Python code from Rust, see the Python from Rust page.
Python modules
You can create a module using `#[pymodule]`:
use pyo3::prelude::*;

#[pyfunction]
fn double(x: usize) -> usize {
    x * 2
}

/// This module is implemented in Rust.
#[pymodule]
fn my_extension(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
The `#[pymodule]` procedural macro takes care of exporting the initialization function of your module to Python.
The module's name defaults to the name of the Rust function. You can override the module name by using `#[pyo3(name = "custom_name")]`:
use pyo3::prelude::*;

#[pyfunction]
fn double(x: usize) -> usize {
    x * 2
}

#[pymodule]
#[pyo3(name = "custom_name")]
fn my_extension(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
The name of the module must match the name of the `.so` or `.pyd` file. Otherwise, you will get an import error in Python with the following message:
ImportError: dynamic module does not define module export function (PyInit_name_of_your_module)
To import the module, either:
- copy the shared library as described in Manual builds, or
- use a tool, e.g. `maturin develop` with maturin or `python setup.py develop` with setuptools-rust.
Documentation
The Rust doc comments of the module initialization function will be applied automatically as the Python docstring of your module.
For example, building off of the above code, this will print `This module is implemented in Rust.`:
import my_extension
print(my_extension.__doc__)
Python submodules
You can create a module hierarchy within a single extension module by using `PyModule.add_submodule()`.
For example, you could define the modules `parent_module` and `parent_module.child_module`.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pymodule] fn parent_module(py: Python<'_>, m: &PyModule) -> PyResult<()> { register_child_module(py, m)?; Ok(()) } fn register_child_module(py: Python<'_>, parent_module: &PyModule) -> PyResult<()> { let child_module = PyModule::new(py, "child_module")?; child_module.add_function(wrap_pyfunction!(func, child_module)?)?; parent_module.add_submodule(child_module)?; Ok(()) } #[pyfunction] fn func() -> String { "func".to_string() } Python::with_gil(|py| { use pyo3::wrap_pymodule; use pyo3::types::IntoPyDict; let parent_module = wrap_pymodule!(parent_module)(py); let ctx = [("parent_module", parent_module)].into_py_dict(py); py.run("assert parent_module.child_module.func() == 'func'", None, Some(&ctx)).unwrap(); }) }
Note that this does not define a package, so this won't allow Python code to directly import submodules by using `from parent_module import child_module`. For more information, see #759 and #1517.
It is not necessary to add `#[pymodule]` on nested modules; it is only required on the top-level module.
Python functions
The `#[pyfunction]` attribute is used to define a Python function from a Rust function. Once defined, the function needs to be added to a module using the `wrap_pyfunction!` macro.
The following example defines a function called `double` in a Python module called `my_extension`:
use pyo3::prelude::*;

#[pyfunction]
fn double(x: usize) -> usize {
    x * 2
}

#[pymodule]
fn my_extension(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
This chapter of the guide explains full usage of the `#[pyfunction]` attribute. In this first section, the following topics are covered:
There are also additional sections on the following topics:
Function options
The `#[pyo3]` attribute can be used to modify properties of the generated Python function. It can take any combination of the following options:
- `#[pyo3(name = "...")]`

  Overrides the name exposed to Python.

  In the following example, the Rust function `no_args_py` will be added to the Python module `module_with_functions` as the Python function `no_args`:

  use pyo3::prelude::*;

  #[pyfunction]
  #[pyo3(name = "no_args")]
  fn no_args_py() -> usize {
      42
  }

  #[pymodule]
  fn module_with_functions(py: Python<'_>, m: &PyModule) -> PyResult<()> {
      m.add_function(wrap_pyfunction!(no_args_py, m)?)?;
      Ok(())
  }

  fn main() {
      Python::with_gil(|py| {
          let m = pyo3::wrap_pymodule!(module_with_functions)(py);
          assert!(m.getattr(py, "no_args").is_ok());
          assert!(m.getattr(py, "no_args_py").is_err());
      });
  }
- `#[pyo3(text_signature = "...")]`

  Sets the function signature visible in Python tooling (such as via `inspect.signature`).

  The example below creates a function `add` which has a signature describing two positional-only arguments `a` and `b`:

  use pyo3::prelude::*;

  /// This function adds two unsigned 64-bit integers.
  #[pyfunction]
  #[pyo3(text_signature = "(a, b, /)")]
  fn add(a: u64, b: u64) -> u64 {
      a + b
  }

  fn main() -> PyResult<()> {
      Python::with_gil(|py| {
          let fun = pyo3::wrap_pyfunction!(add, py)?;

          let doc: String = fun.getattr("__doc__")?.extract()?;
          assert_eq!(doc, "This function adds two unsigned 64-bit integers.");

          let inspect = PyModule::import(py, "inspect")?.getattr("signature")?;
          let sig: String = inspect
              .call1((fun,))?
              .call_method0("__str__")?
              .extract()?;
          assert_eq!(sig, "(a, b, /)");

          Ok(())
      })
  }
- `#[pyo3(pass_module)]`

  Set this option to make PyO3 pass the containing module as the first argument to the function. It is then possible to use the module in the function body. The first argument must be of type `&PyModule`.

  The following example creates a function `pyfunction_with_module` which returns the containing module's name (i.e. `module_with_fn`):

  use pyo3::prelude::*;

  #[pyfunction]
  #[pyo3(pass_module)]
  fn pyfunction_with_module(module: &PyModule) -> PyResult<&str> {
      module.name()
  }

  #[pymodule]
  fn module_with_fn(py: Python<'_>, m: &PyModule) -> PyResult<()> {
      m.add_function(wrap_pyfunction!(pyfunction_with_module, m)?)
  }
Per-argument options
The `#[pyo3]` attribute can be used on individual arguments to modify their properties in the generated function. It can take any combination of the following options:
- `#[pyo3(from_py_with = "...")]`

  Set this on an argument to specify a custom function to convert the function argument from Python to the desired Rust type, instead of using the default `FromPyObject` extraction. The function signature must be `fn(&PyAny) -> PyResult<T>` where `T` is the Rust type of the argument.

  The following example uses `from_py_with` to convert the input Python object to its length:

  use pyo3::prelude::*;

  fn get_length(obj: &PyAny) -> PyResult<usize> {
      let length = obj.len()?;
      Ok(length)
  }

  #[pyfunction]
  fn object_length(#[pyo3(from_py_with = "get_length")] argument: usize) -> usize {
      argument
  }

  fn main() {
      Python::with_gil(|py| {
          let f = pyo3::wrap_pyfunction!(object_length)(py).unwrap();
          assert_eq!(f.call1((vec![1, 2, 3],)).unwrap().extract::<usize>().unwrap(), 3);
      });
  }
Advanced function patterns
Making the function signature available to Python (old method)
Alternatively, simply make sure the first line of your docstring is formatted like in the following example. Please note that the newline after the `--` is mandatory. The `/` signifies the end of positional-only arguments.
`#[pyo3(text_signature)]` should be preferred, since it will override automatically generated signatures when those are added in a future version of PyO3.
#![allow(dead_code)]
use pyo3::prelude::*;

/// add(a, b, /)
/// --
///
/// This function adds two unsigned 64-bit integers.
#[pyfunction]
fn add(a: u64, b: u64) -> u64 {
    a + b
}

// a function with a signature but without docs. Both blank lines after the `--` are mandatory.

/// sub(a, b, /)
/// --
///
///
#[pyfunction]
fn sub(a: u64, b: u64) -> u64 {
    a - b
}
When annotated like this, signatures are also correctly displayed in IPython.
>>> pyo3_test.add?
Signature: pyo3_test.add(a, b, /)
Docstring: This function adds two unsigned 64-bit integers.
Type: builtin_function_or_method
Calling Python functions in Rust
You can pass Python `def`'d functions and built-in functions to Rust functions. `PyFunction` corresponds to regular Python functions while `PyCFunction` describes built-ins such as `repr()`.
You can also use `PyAny::is_callable` to check if you have a callable object. `is_callable` will return `true` for functions (including lambdas), methods and objects with a `__call__` method.
You can call the object with `PyAny::call` with the args as first parameter and the kwargs (or `None`) as second parameter. There are also `PyAny::call0` with no args and `PyAny::call1` with only positional args.
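For instance, here is a minimal sketch of these call methods (it uses the built-in `print` purely for illustration; any callable object obtained from Python works the same way):

```rust
use pyo3::prelude::*;
use pyo3::types::PyDict;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        // Any callable PyAny works; the built-in `print` is used here for illustration.
        let print = PyModule::import(py, "builtins")?.getattr("print")?;
        assert!(print.is_callable());

        // No arguments.
        print.call0()?;
        // Positional arguments only.
        print.call1(("hello", "world"))?;
        // Positional arguments plus keyword arguments.
        let kwargs = PyDict::new(py);
        kwargs.set_item("sep", ", ")?;
        print.call(("hello", "world"), Some(kwargs))?;
        Ok(())
    })
}
```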
Calling Rust functions in Python
The ways to convert a Rust function into a Python object vary depending on the function:
- Named functions, e.g. `fn foo()`: add `#[pyfunction]` and then use `wrap_pyfunction!` to get the corresponding `PyCFunction`.
- Anonymous functions (or closures), e.g. `foo: fn()`, either:
  - use a `#[pyclass]` struct which stores the function as a field and implement `__call__` to call the stored function (a sketch of this approach follows below), or
  - use `PyFunction::new_closure` to create an object directly from the function.
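As a sketch of the first approach (the names `Callback` and `double` are illustrative, not part of PyO3), a `#[pyclass]` can store a plain function pointer and expose it through `__call__`:

```rust
use pyo3::prelude::*;

#[pyclass]
struct Callback {
    // A plain function pointer; any stored callable works as long as the struct stays `Send`.
    func: fn(i32) -> i32,
}

#[pymethods]
impl Callback {
    fn __call__(&self, x: i32) -> i32 {
        (self.func)(x)
    }
}

fn double(x: i32) -> i32 {
    x * 2
}

fn main() {
    Python::with_gil(|py| {
        let cb = Py::new(py, Callback { func: double }).unwrap();
        // The wrapped Rust function is now callable from Python.
        pyo3::py_run!(py, cb, "assert cb(21) == 42");
    });
}
```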
Accessing the FFI functions
In order to make Rust functions callable from Python, PyO3 generates an `extern "C"` function whose exact signature depends on the Rust signature. (PyO3 chooses the optimal Python argument-passing convention.) It then embeds the call to the Rust function inside this FFI-wrapper function. This wrapper handles extraction of the regular arguments and the keyword arguments from the input `PyObject`s.
The `wrap_pyfunction` macro can be used to directly get a `PyCFunction` given a `#[pyfunction]` and a `PyModule`: `wrap_pyfunction!(rust_fun, module)`.
#[pyfn] shorthand
There is a shorthand to `#[pyfunction]` and `wrap_pymodule!`: the function can be placed inside the module definition and annotated with `#[pyfn]`. To simplify PyO3, it is expected that `#[pyfn]` may be removed in a future release (see #694).
An example of `#[pyfn]` is below:
use pyo3::prelude::*;

#[pymodule]
fn my_extension(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    #[pyfn(m)]
    fn double(x: usize) -> usize {
        x * 2
    }

    Ok(())
}
`#[pyfn(m)]` is just syntactic sugar for `#[pyfunction]`, and takes all the same options documented in the rest of this chapter. The code above is expanded to the following:
use pyo3::prelude::*;

#[pymodule]
fn my_extension(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    #[pyfunction]
    fn double(x: usize) -> usize {
        x * 2
    }

    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
Function signatures
The `#[pyfunction]` attribute also accepts parameters to control how the generated Python function accepts arguments. Just like in Python, arguments can be positional-only, keyword-only, or accept either. `*args` lists and `**kwargs` dicts can also be accepted. These parameters also work for `#[pymethods]`, which will be introduced in the Python Classes section of the guide.
Like Python, by default PyO3 accepts all arguments as either positional or keyword arguments. The extra arguments to `#[pyfunction]` modify this behaviour. For example, below is a function that accepts arbitrary keyword arguments (`**kwargs` in Python syntax) and returns the number that was passed:
use pyo3::prelude::*;
use pyo3::types::PyDict;

#[pyfunction(kwds="**")]
fn num_kwds(kwds: Option<&PyDict>) -> usize {
    kwds.map_or(0, |dict| dict.len())
}

#[pymodule]
fn module_with_functions(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(num_kwds, m)?).unwrap();
    Ok(())
}
The following parameters can be passed to the `#[pyfunction]` attribute:
- `"/"`: positional-only arguments separator; each parameter defined before `"/"` is a positional-only parameter. Corresponds to Python's `def meth(arg1, arg2, ..., /, argN..)`.
- `"*"`: var arguments separator; each parameter defined after `"*"` is a keyword-only parameter. Corresponds to Python's `def meth(*, arg1.., arg2=..)`.
- `args="*"`: "args" is var args, corresponds to Python's `def meth(*args)`. The type of the `args` parameter has to be `&PyTuple`.
- `kwargs="**"`: "kwargs" receives keyword arguments, corresponds to Python's `def meth(**kwargs)`. The type of the `kwargs` parameter has to be `Option<&PyDict>`.
- `arg="Value"`: arguments with a default value. Corresponds to Python's `def meth(arg=Value)`. If the `arg` argument is defined after var arguments, it is treated as a keyword-only argument. Note that `Value` has to be valid Rust code; PyO3 just inserts it into the generated code unmodified.
Example:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::{PyDict, PyTuple}; #[pyclass] struct MyClass { num: i32, } #[pymethods] impl MyClass { #[new] #[args(num = "-1")] fn new(num: i32) -> Self { MyClass { num } } #[args( num = "10", py_args = "*", name = "\"Hello\"", py_kwargs = "**" )] fn method( &mut self, num: i32, name: &str, py_args: &PyTuple, py_kwargs: Option<&PyDict>, ) -> PyResult<String> { self.num = num; Ok(format!( "py_args={:?}, py_kwargs={:?}, name={}, num={}", py_args, py_kwargs, name, self.num )) } fn make_change(&mut self, num: i32) -> PyResult<String> { self.num = num; Ok(format!("num={}", self.num)) } } }
N.B. the position of the `"/"` and `"*"` arguments (if included) controls the system of handling positional and keyword arguments. In Python:
import mymodule
mc = mymodule.MyClass()
print(mc.method(44, False, "World", 666, x=44, y=55))
print(mc.method(num=-1, name="World"))
print(mc.make_change(44, False))
Produces output:
py_args=('World', 666), py_kwargs=Some({'x': 44, 'y': 55}), name=Hello, num=44
py_args=(), py_kwargs=None, name=World, num=-1
num=44
num=-1
Error handling
This chapter contains a little background on error handling in Rust and how PyO3 integrates this with Python exceptions.
It covers enough detail to create a `#[pyfunction]` which raises Python exceptions from errors originating in Rust.
There is a later section of the guide on Python exceptions which covers exception types in more detail.
Representing Python exceptions
Rust code uses the generic `Result<T, E>` enum to propagate errors. The error type `E` is chosen by the code author to describe the possible errors which can happen.
PyO3 has the `PyErr` type which represents a Python exception. If a PyO3 API could result in a Python exception being raised, the return type of that API will be `PyResult<T>`, which is an alias for the type `Result<T, PyErr>`.
In summary:
- When Python exceptions are raised and caught by PyO3, the exception will be stored in the `Err` variant of the `PyResult`.
- Passing Python exceptions through Rust code then uses all the "normal" techniques such as the `?` operator, with `PyErr` as the error type.
- Finally, when a `PyResult` crosses from Rust back to Python via PyO3, if the result is an `Err` variant the contained exception will be raised.
(There are many great tutorials on Rust error handling and the `?` operator, so this guide will not go into detail on Rust-specific topics.)
Raising an exception from a function
As indicated in the previous section, when a `PyResult` containing an `Err` crosses from Rust to Python, PyO3 will raise the exception contained within.
Accordingly, to raise an exception from a `#[pyfunction]`, change the return type `T` to `PyResult<T>`. When the function returns an `Err` it will raise a Python exception. (Other `Result<T, E>` types can be used as long as the error `E` has a `From` conversion for `PyErr`; see implementing a conversion below.)
This also works for functions in `#[pymethods]`.
For example, the following `check_positive` function raises a `ValueError` when the input is negative:
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

#[pyfunction]
fn check_positive(x: i32) -> PyResult<()> {
    if x < 0 {
        Err(PyValueError::new_err("x is negative"))
    } else {
        Ok(())
    }
}

fn main() {
    Python::with_gil(|py| {
        let fun = pyo3::wrap_pyfunction!(check_positive, py).unwrap();
        fun.call1((-1,)).unwrap_err();
        fun.call1((1,)).unwrap();
    });
}
All built-in Python exception types are defined in the `pyo3::exceptions` module. They have a `new_err` constructor to directly build a `PyErr`, as seen in the example above.
Custom Rust error types
PyO3 will automatically convert a `Result<T, E>` returned by a `#[pyfunction]` into a `PyResult<T>` as long as there is an implementation of `std::convert::From<E> for PyErr`. Many error types in the Rust standard library have a `From` conversion defined in this way.
If the type `E` you are handling is defined in a third-party crate, see the section on foreign Rust error types below for ways to work with this error.
The following example makes use of the implementation of `From<ParseIntError> for PyErr` to raise exceptions encountered when parsing strings as integers:
use pyo3::prelude::*;
use std::num::ParseIntError;

#[pyfunction]
fn parse_int(x: &str) -> Result<usize, ParseIntError> {
    x.parse()
}

fn main() {
    Python::with_gil(|py| {
        let fun = pyo3::wrap_pyfunction!(parse_int, py).unwrap();
        let value: usize = fun.call1(("5",)).unwrap().extract().unwrap();
        assert_eq!(value, 5);
    });
}
When passed a string which doesn't contain an integer, the exception raised will look like the below:
>>> parse_int("bar")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid digit found in string
As a more complete example, the following snippet defines a Rust error named `CustomIOError`. It then defines a `From<CustomIOError> for PyErr`, which returns a `PyErr` representing Python's `OSError`. Finally, it uses this error type in a `#[pyfunction]`, so that the error is raised as an `OSError` when the function is called from Python.
use pyo3::exceptions::PyOSError; use pyo3::prelude::*; use std::fmt; #[derive(Debug)] struct CustomIOError; impl std::error::Error for CustomIOError {} impl fmt::Display for CustomIOError { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "Oh no!") } } impl std::convert::From<CustomIOError> for PyErr { fn from(err: CustomIOError) -> PyErr { PyOSError::new_err(err.to_string()) } } pub struct Connection { /* ... */} fn bind(addr: String) -> Result<Connection, CustomIOError> { if &addr == "0.0.0.0"{ Err(CustomIOError) } else { Ok(Connection{ /* ... */}) } } #[pyfunction] fn connect(s: String) -> Result<(), CustomIOError> { bind(s)?; Ok(()) } fn main() { Python::with_gil(|py| { let fun = pyo3::wrap_pyfunction!(connect, py).unwrap(); let err = fun.call1(("0.0.0.0",)).unwrap_err(); assert!(err.is_instance_of::<PyOSError>(py)); }); }
If lazy construction of the Python exception instance is desired, the `PyErrArguments` trait can be implemented instead of `From`. In that case, actual exception argument creation is delayed until the `PyErr` is needed.
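As a rough sketch (the `MyError` type here is made up for illustration, and assumes the `PyErrArguments` implementation below is the right shape), lazy construction might look like this:

```rust
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

// Hypothetical error type used only for illustration.
#[derive(Debug)]
struct MyError {
    message: String,
}

// The exception arguments are only built when the PyErr is actually used.
impl pyo3::PyErrArguments for MyError {
    fn arguments(self, py: Python<'_>) -> PyObject {
        self.message.into_py(py)
    }
}

impl From<MyError> for PyErr {
    fn from(err: MyError) -> PyErr {
        // `new_err` accepts any type implementing `PyErrArguments`.
        PyValueError::new_err(err)
    }
}
```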
A final note is that any errors `E` which have a `From` conversion to `PyErr` can be used with the `?` ("try") operator. An alternative implementation of the above `parse_int` which instead returns `PyResult` is below:
use pyo3::prelude::*;
use pyo3::exceptions::PyValueError;

fn parse_int(s: String) -> PyResult<usize> {
    let x = s.parse()?;
    Ok(x)
}

fn main() {
    Python::with_gil(|py| {
        assert_eq!(parse_int(String::from("1")).unwrap(), 1);
        assert_eq!(parse_int(String::from("1337")).unwrap(), 1337);

        assert!(parse_int(String::from("-1"))
            .unwrap_err()
            .is_instance_of::<PyValueError>(py));
        assert!(parse_int(String::from("foo"))
            .unwrap_err()
            .is_instance_of::<PyValueError>(py));
        assert!(parse_int(String::from("13.37"))
            .unwrap_err()
            .is_instance_of::<PyValueError>(py));
    })
}
Foreign Rust error types
The Rust compiler will not permit implementation of traits for types outside of the crate where the type is defined. (This is known as the "orphan rule".)
Given a type `OtherError` which is defined in third-party code, there are two main strategies available to integrate it with PyO3:
- Create a newtype wrapper, e.g. `MyOtherError`. Then implement `From<MyOtherError> for PyErr` (or `PyErrArguments`), as well as `From<OtherError>` for `MyOtherError`.
- Use Rust's Result combinators such as `map_err` to write code freely to convert `OtherError` into whatever is needed. This requires boilerplate at every usage, but gives unlimited flexibility; a sketch of this approach follows below.
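For illustration, here is a minimal sketch of the second (`map_err`) approach, mirroring the imaginary `some_crate` from the example further below so the snippet is self-contained:

```rust
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

// Imaginary third-party crate, copied here only so the sketch compiles on its own.
mod some_crate {
    pub struct OtherError(());
    impl OtherError {
        pub fn message(&self) -> &'static str {
            "some error occurred"
        }
    }
    pub fn get_x() -> Result<i32, OtherError> {
        Ok(5)
    }
}

// Convert the foreign error at the call site instead of defining a newtype.
#[pyfunction]
fn wrapped_get_x() -> PyResult<i32> {
    let x = some_crate::get_x().map_err(|e| PyValueError::new_err(e.message()))?;
    Ok(x)
}
```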
To detail the newtype strategy a little further, the key trick is to return `Result<T, MyOtherError>` from the `#[pyfunction]`. This means that PyO3 will make use of `From<MyOtherError> for PyErr` to create Python exceptions while the `#[pyfunction]` implementation can use `?` to convert `OtherError` to `MyOtherError` automatically.
The following example demonstrates this for some imaginary third-party crate `some_crate` with a function `get_x` returning `Result<i32, OtherError>`:
mod some_crate { pub struct OtherError(()); impl OtherError { pub fn message(&self) -> &'static str { "some error occurred" } } pub fn get_x() -> Result<i32, OtherError> { Ok(5) } } use pyo3::prelude::*; use pyo3::exceptions::PyValueError; use some_crate::{OtherError, get_x}; struct MyOtherError(OtherError); impl From<MyOtherError> for PyErr { fn from(error: MyOtherError) -> Self { PyValueError::new_err(error.0.message()) } } impl From<OtherError> for MyOtherError { fn from(other: OtherError) -> Self { Self(other) } } #[pyfunction] fn wrapped_get_x() -> Result<i32, MyOtherError> { // get_x is a function returning Result<i32, OtherError> let x: i32 = get_x()?; Ok(x) } fn main() { Python::with_gil(|py| { let fun = pyo3::wrap_pyfunction!(wrapped_get_x, py).unwrap(); let value: usize = fun.call0().unwrap().extract().unwrap(); assert_eq!(value, 5); }); }
Python classes
PyO3 exposes a group of attributes powered by Rust's proc macro system for defining Python classes as Rust structs.
The main attribute is `#[pyclass]`, which is placed upon a Rust `struct` or a fieldless `enum` (a.k.a. C-like enum) to generate a Python type for it. They will usually also have one `#[pymethods]`-annotated `impl` block for the struct, which is used to define Python methods and constants for the generated Python type. (If the `multiple-pymethods` feature is enabled, each `#[pyclass]` is allowed to have multiple `#[pymethods]` blocks.) `#[pymethods]` may also have implementations for Python magic methods such as `__str__`.
This chapter will discuss the functionality and configuration these attributes offer. Below is a list of links to the relevant section of this chapter for each:
Defining a new class
To define a custom Python class, add the `#[pyclass]` attribute to a Rust struct or a fieldless enum.
#![allow(dead_code)]
use pyo3::prelude::*;

#[pyclass]
struct Integer {
    inner: i32,
}

// A "tuple" struct
#[pyclass]
struct Number(i32);

// PyO3 supports custom discriminants in enums
#[pyclass]
enum HttpResponse {
    Ok = 200,
    NotFound = 404,
    Teapot = 418,
    // ...
}

#[pyclass]
enum MyEnum {
    Variant,
    OtherVariant = 30, // PyO3 supports custom discriminants.
}
The above example generates implementations for `PyTypeInfo` and `PyClass` for each of these types (`Integer`, `Number`, `HttpResponse`, and `MyEnum`). To see these generated implementations, refer to the implementation details at the end of this chapter.
Restrictions
To integrate Rust types with Python, PyO3 needs to place some restrictions on the types which can be annotated with `#[pyclass]`. In particular, they must have no lifetime parameters, no generic parameters, and must implement `Send`. The reason for each of these is explained below.
No lifetime parameters
Rust lifetimes are used by the Rust compiler to reason about a program's memory safety. They are a compile-time only concept; there is no way to access Rust lifetimes at runtime from a dynamic language like Python.
As soon as Rust data is exposed to Python, there is no guarantee that the Rust compiler can make about how long the data will live. Python is a reference-counted language and those references can be held for an arbitrarily long time which is untraceable by the Rust compiler. The only possible way to express this correctly is to require that any `#[pyclass]` does not borrow data for any lifetime shorter than the `'static` lifetime, i.e. the `#[pyclass]` cannot have any lifetime parameters.
When you need to share ownership of data between Python and Rust, instead of using borrowed references with lifetimes consider using reference-counted smart pointers such as `Arc` or `Py`.
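For example, a minimal sketch (the type and field names here are illustrative) of sharing owned data through `Arc` instead of a borrowed reference:

```rust
use pyo3::prelude::*;
use std::sync::Arc;

#[pyclass]
struct Document {
    // The Arc lets Rust and the Python-owned object share the same buffer
    // without any lifetime parameter on the #[pyclass].
    contents: Arc<Vec<u8>>,
}

#[pymethods]
impl Document {
    #[new]
    fn new(contents: Vec<u8>) -> Self {
        Document { contents: Arc::new(contents) }
    }

    fn size(&self) -> usize {
        self.contents.len()
    }
}
```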
No generic parameters
A Rust struct `Foo<T>` with a generic parameter `T` generates new compiled implementations each time it is used with a different concrete type for `T`. These new implementations are generated by the compiler at each usage site. This is incompatible with wrapping `Foo` in Python, where there needs to be a single compiled implementation of `Foo` which is integrated with the Python interpreter.
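One common workaround (a sketch, not something PyO3 mandates) is to expose a separate concrete `#[pyclass]` for each instantiation you need, for example generated by a small macro:

```rust
use pyo3::prelude::*;

// The generic Rust type that cannot be a #[pyclass] directly.
struct Foo<T> {
    inner: T,
}

// Generate one concrete wrapper class per instantiation.
macro_rules! create_interface {
    ($name: ident, $type: ty) => {
        #[pyclass]
        struct $name {
            inner: Foo<$type>,
        }

        #[pymethods]
        impl $name {
            #[new]
            fn new(value: $type) -> Self {
                Self { inner: Foo { inner: value } }
            }

            fn get(&self) -> $type {
                self.inner.inner.clone()
            }
        }
    };
}

create_interface!(IntFoo, i64);
create_interface!(StringFoo, String);
```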
Must be Send
Because Python objects are freely shared between threads by the Python interpreter, there is no guarantee which thread will eventually drop the object. Therefore all types annotated with `#[pyclass]` must implement `Send` (unless annotated with `#[pyclass(unsendable)]`).
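For instance, here is a sketch (an assumed example, not from the guide text) of a class holding a non-`Send` `Rc`, which therefore must be marked `unsendable`:

```rust
use pyo3::prelude::*;
use std::rc::Rc;

// Rc is not Send, so this class must opt out with `unsendable`; accessing it
// from a thread other than the one that created it will panic at runtime.
#[pyclass(unsendable)]
struct NotThreadSafe {
    data: Rc<Vec<u8>>,
}

#[pymethods]
impl NotThreadSafe {
    #[new]
    fn new(data: Vec<u8>) -> Self {
        NotThreadSafe { data: Rc::new(data) }
    }

    fn len(&self) -> usize {
        self.data.len()
    }
}
```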
Constructor
By default it is not possible to create an instance of a custom class from Python code. To declare a constructor, you need to define a method and annotate it with the `#[new]` attribute. Only Python's `__new__` method can be specified; `__init__` is not available.
use pyo3::prelude::*;

#[pyclass]
struct Number(i32);

#[pymethods]
impl Number {
    #[new]
    fn new(value: i32) -> Self {
        Number(value)
    }
}
Alternatively, if your `new` method may fail you can return `PyResult<Self>`.
use pyo3::prelude::*;
use pyo3::exceptions::PyValueError;

#[pyclass]
struct Nonzero(i32);

#[pymethods]
impl Nonzero {
    #[new]
    fn py_new(value: i32) -> PyResult<Self> {
        if value == 0 {
            Err(PyValueError::new_err("cannot be zero"))
        } else {
            Ok(Nonzero(value))
        }
    }
}
As you can see, the Rust method name is not important here; this way you can still use `new()` for a Rust-level constructor.
If no method marked with `#[new]` is declared, object instances can only be created from Rust, but not from Python.
For arguments, see the Method arguments section below.
Adding the class to a module
The next step is to create the module initializer and add our class to it:
use pyo3::prelude::*;

#[pyclass]
struct Number(i32);

#[pymodule]
fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_class::<Number>()?;
    Ok(())
}
PyCell and interior mutability
You sometimes need to convert your `pyclass` into a Python object and access it from Rust code (e.g., for testing it). `PyCell` is the primary interface for that.
`PyCell<T: PyClass>` is always allocated in the Python heap, so Rust doesn't have ownership of it. In other words, Rust code can only extract a `&PyCell<T>`, not a `PyCell<T>`.
Thus, to mutate data behind `&PyCell` safely, PyO3 employs the Interior Mutability Pattern like `RefCell`.
Users who are familiar with `RefCell` can use `PyCell` just like `RefCell`.
For users who are not very familiar with `RefCell`, here is a reminder of Rust's rules of borrowing:
- At any given time, you can have either (but not both of) one mutable reference or any number of immutable references.
- References must always be valid.
`PyCell`, like `RefCell`, ensures these borrowing rules by tracking references at runtime.
use pyo3::prelude::*;

#[pyclass]
struct MyClass {
    #[pyo3(get)]
    num: i32,
}

fn main() {
    Python::with_gil(|py| {
        let obj = PyCell::new(py, MyClass { num: 3 }).unwrap();
        {
            let obj_ref = obj.borrow(); // Get PyRef
            assert_eq!(obj_ref.num, 3);
            // You cannot get PyRefMut unless all PyRefs are dropped
            assert!(obj.try_borrow_mut().is_err());
        }
        {
            let mut obj_mut = obj.borrow_mut(); // Get PyRefMut
            obj_mut.num = 5;
            // You cannot get any other refs until the PyRefMut is dropped
            assert!(obj.try_borrow().is_err());
            assert!(obj.try_borrow_mut().is_err());
        }

        // You can convert `&PyCell` to a Python object
        pyo3::py_run!(py, obj, "assert obj.num == 5");
    });
}
`&PyCell<T>` is bounded by the same lifetime as a `GILGuard`. To make the object longer lived (for example, to store it in a struct on the Rust side), you can use `Py<T>`, which stores an object longer than the GIL lifetime, and therefore needs a `Python<'_>` token to access.
use pyo3::prelude::*;

#[pyclass]
struct MyClass {
    num: i32,
}

fn return_myclass() -> Py<MyClass> {
    Python::with_gil(|py| Py::new(py, MyClass { num: 1 }).unwrap())
}

fn main() {
    let obj = return_myclass();

    Python::with_gil(|py| {
        let cell = obj.as_ref(py); // Py<MyClass>::as_ref returns &PyCell<MyClass>
        let obj_ref = cell.borrow(); // Get PyRef<T>
        assert_eq!(obj_ref.num, 1);
    });
}
Customizing the class
`#[pyclass]` can be used with the following parameters:
Parameter | Description |
---|---|
crate = "some::path" | Path to import the pyo3 crate, if it's not accessible at ::pyo3 . |
dict | Gives instances of this class an empty __dict__ to store custom attributes. |
extends = BaseType | Use a custom baseclass. Defaults to PyAny |
freelist = N | Implements a free list of size N. This can improve performance for types that are often created and deleted in quick succession. Profile your code to see whether freelist is right for you. |
frozen | Declares that your pyclass is immutable. It removes the borrowchecker overhead when retrieving a shared reference to the Rust struct, but disables the ability to get a mutable reference. |
mapping | Inform PyO3 that this class is a Mapping , and so leave its implementation of sequence C-API slots empty. |
module = "module_name" | Python code will see the class as being defined in this module. Defaults to builtins . |
name = "python_name" | Sets the name that Python sees this class as. Defaults to the name of the Rust struct. |
sequence | Inform PyO3 that this class is a Sequence , and so leave its C-API mapping length slot empty. |
subclass | Allows other Python classes and #[pyclass] to inherit from this class. Enums cannot be subclassed. |
text_signature = "(arg1, arg2, ...)" | Sets the text signature for the Python class' __new__ method. |
unsendable | Required if your struct is not Send . Rather than using unsendable , consider implementing your struct in a threadsafe way by e.g. substituting Rc with Arc . By using unsendable , your class will panic when accessed by another thread. |
weakref | Allows this class to be weakly referenceable. |
All of these parameters can either be passed directly on the #[pyclass(...)]
annotation, or as one or
more accompanying #[pyo3(...)]
annotations, e.g.:
// Argument supplied directly to the `#[pyclass]` annotation.
#[pyclass(name = "SomeName", subclass)]
struct MyClass { }
// Argument supplied as a separate annotation.
#[pyclass]
#[pyo3(name = "SomeName", subclass)]
struct MyClass { }
These parameters are covered in various sections of this guide.
Return type
Generally, `#[new]` methods have to return `T: Into<PyClassInitializer<Self>>` or `PyResult<T> where T: Into<PyClassInitializer<Self>>`.
For constructors that may fail, you should wrap the return type in a `PyResult` as well. Consult the table below to determine which type your constructor should return:
| | Cannot fail | May fail |
|---|---|---|
| No inheritance | T | PyResult<T> |
| Inheritance (T inherits U) | (T, U) | PyResult<(T, U)> |
| Inheritance (general case) | PyClassInitializer<T> | PyResult<PyClassInitializer<T>> |
Inheritance
By default, `PyAny` is used as the base class. To override this default, use the `extends` parameter for `pyclass` with the full path to the base class.
For convenience, `(T, U)` implements `Into<PyClassInitializer<T>>` where `U` is the baseclass of `T`. But for more deeply nested inheritance, you have to return `PyClassInitializer<T>` explicitly.
To get a parent class from a child, use `PyRef` instead of `&self` for methods, or `PyRefMut` instead of `&mut self`. Then you can access a parent class by `self_.as_ref()` as `&Self::BaseClass`, or by `self_.into_super()` as `PyRef<Self::BaseClass>`.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass(subclass)] struct BaseClass { val1: usize, } #[pymethods] impl BaseClass { #[new] fn new() -> Self { BaseClass { val1: 10 } } pub fn method(&self) -> PyResult<usize> { Ok(self.val1) } } #[pyclass(extends=BaseClass, subclass)] struct SubClass { val2: usize, } #[pymethods] impl SubClass { #[new] fn new() -> (Self, BaseClass) { (SubClass { val2: 15 }, BaseClass::new()) } fn method2(self_: PyRef<'_, Self>) -> PyResult<usize> { let super_ = self_.as_ref(); // Get &BaseClass super_.method().map(|x| x * self_.val2) } } #[pyclass(extends=SubClass)] struct SubSubClass { val3: usize, } #[pymethods] impl SubSubClass { #[new] fn new() -> PyClassInitializer<Self> { PyClassInitializer::from(SubClass::new()) .add_subclass(SubSubClass{val3: 20}) } fn method3(self_: PyRef<'_, Self>) -> PyResult<usize> { let v = self_.val3; let super_ = self_.into_super(); // Get PyRef<'_, SubClass> SubClass::method2(super_).map(|x| x * v) } } Python::with_gil(|py| { let subsub = pyo3::PyCell::new(py, SubSubClass::new()).unwrap(); pyo3::py_run!(py, subsub, "assert subsub.method3() == 3000") }); }
You can also inherit native types such as `PyDict`, if they implement `PySizedLayout`. However, this is not supported when building for the Python limited API (aka the `abi3` feature of PyO3).
Because of some technical problems, we don't currently provide safe upcasting methods for types that inherit native types. Even in such cases, you can unsafely get a base class by raw pointer conversion.
#![allow(unused)] fn main() { #[cfg(not(Py_LIMITED_API))] { use pyo3::prelude::*; use pyo3::types::PyDict; use pyo3::AsPyPointer; use std::collections::HashMap; #[pyclass(extends=PyDict)] #[derive(Default)] struct DictWithCounter { counter: HashMap<String, usize>, } #[pymethods] impl DictWithCounter { #[new] fn new() -> Self { Self::default() } fn set(mut self_: PyRefMut<'_, Self>, key: String, value: &PyAny) -> PyResult<()> { self_.counter.entry(key.clone()).or_insert(0); let py = self_.py(); let dict: &PyDict = unsafe { py.from_borrowed_ptr_or_err(self_.as_ptr())? }; dict.set_item(key, value) } } Python::with_gil(|py| { let cnt = pyo3::PyCell::new(py, DictWithCounter::new()).unwrap(); pyo3::py_run!(py, cnt, "cnt.set('abc', 10); assert cnt['abc'] == 10") }); } }
If `SubClass` does not provide a baseclass initialization, the compilation fails.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct BaseClass { val1: usize, } #[pyclass(extends=BaseClass)] struct SubClass { val2: usize, } #[pymethods] impl SubClass { #[new] fn new() -> Self { SubClass { val2: 15 } } } }
Object properties
PyO3 supports two ways to add properties to your `#[pyclass]`:
- For simple struct fields with no side effects, a `#[pyo3(get, set)]` attribute can be added directly to the field definition in the `#[pyclass]`.
- For properties which require computation you can define `#[getter]` and `#[setter]` functions in the `#[pymethods]` block.
We'll cover each of these in the following sections.
Object properties using #[pyo3(get, set)]
For simple cases where a member variable is just read and written with no side effects, you can declare getters and setters in your `#[pyclass]` field definition using the `pyo3` attribute, like in the example below:
use pyo3::prelude::*;

#[pyclass]
struct MyClass {
    #[pyo3(get, set)]
    num: i32,
}
The above would make the `num` field available for reading and writing as a `self.num` Python property. To expose the property with a different name to the field, specify this alongside the rest of the options, e.g. `#[pyo3(get, set, name = "custom_name")]`.
Properties can be readonly or writeonly by using just `#[pyo3(get)]` or `#[pyo3(set)]` respectively.
To use these annotations, your field type must implement some conversion traits (an example follows the list):
- For `get` the field type must implement both `IntoPy<PyObject>` and `Clone`.
- For `set` the field type must implement `FromPyObject`.
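For example, a field of type `Vec<String>` satisfies all of these requirements, so the following sketch supports both `get` and `set`:

```rust
use pyo3::prelude::*;

#[pyclass]
struct Config {
    // Vec<String> is Clone + IntoPy<PyObject> (needed for `get`)
    // and FromPyObject (needed for `set`).
    #[pyo3(get, set)]
    tags: Vec<String>,
}
```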
Object properties using #[getter] and #[setter]
For cases which don't satisfy the `#[pyo3(get, set)]` trait requirements, or need side effects, descriptor methods can be defined in a `#[pymethods]` `impl` block.
This is done using the `#[getter]` and `#[setter]` attributes, like in the example below:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { num: i32, } #[pymethods] impl MyClass { #[getter] fn num(&self) -> PyResult<i32> { Ok(self.num) } } }
A getter or setter's function name is used as the property name by default. There are several ways to override the name.
If a function name starts with `get_` or `set_` for getter or setter respectively, the descriptor name becomes the function name with this prefix removed. This is also useful in the case of Rust keywords like `type` (raw identifiers can be used since Rust 2018).
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { num: i32, } #[pymethods] impl MyClass { #[getter] fn get_num(&self) -> PyResult<i32> { Ok(self.num) } #[setter] fn set_num(&mut self, value: i32) -> PyResult<()> { self.num = value; Ok(()) } } }
In this case, a property `num` is defined and available from Python code as `self.num`.
Both the `#[getter]` and `#[setter]` attributes accept one parameter. If this parameter is specified, it is used as the property name, i.e.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { num: i32, } #[pymethods] impl MyClass { #[getter(number)] fn num(&self) -> PyResult<i32> { Ok(self.num) } #[setter(number)] fn set_num(&mut self, value: i32) -> PyResult<()> { self.num = value; Ok(()) } } }
In this case, the property `number` is defined and available from Python code as `self.number`.
Attributes defined by `#[setter]` or `#[pyo3(set)]` will always raise `AttributeError` on `del` operations. Support for defining custom `del` behavior is tracked in #1778.
Instance methods
To define a Python-compatible method, an `impl` block for your struct has to be annotated with the `#[pymethods]` attribute. PyO3 generates Python-compatible wrappers for all functions in this block, with some variations like descriptors, class methods, static methods, etc.
Since Rust allows any number of `impl` blocks, you can easily split methods between those accessible to Python (and Rust) and those accessible only to Rust. However, to have multiple `#[pymethods]`-annotated `impl` blocks for the same struct you must enable the `multiple-pymethods` feature of PyO3.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { num: i32, } #[pymethods] impl MyClass { fn method1(&self) -> PyResult<i32> { Ok(10) } fn set_method(&mut self, value: i32) -> PyResult<()> { self.num = value; Ok(()) } } }
Calls to these methods are protected by the GIL, so both `&self` and `&mut self` can be used. The return type must be `PyResult<T>` or `T` for some `T` that implements `IntoPy<PyObject>`; the latter is allowed if the method cannot raise Python exceptions.
A `Python` parameter can be specified as part of the method signature, in which case the `py` argument gets injected by the method wrapper, e.g.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { #[allow(dead_code)] num: i32, } #[pymethods] impl MyClass { fn method2(&self, py: Python<'_>) -> PyResult<i32> { Ok(10) } } }
From the Python perspective, the `method2` in this example does not accept any arguments.
Class methods
To create a class method for a custom class, the method needs to be annotated with the `#[classmethod]` attribute. This is the equivalent of the Python decorator `@classmethod`.
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyType; #[pyclass] struct MyClass { #[allow(dead_code)] num: i32, } #[pymethods] impl MyClass { #[classmethod] fn cls_method(cls: &PyType) -> PyResult<i32> { Ok(10) } } }
Declares a class method callable from Python.
- The first parameter is the type object of the class on which the method is called. This may be the type object of a derived class.
- The first parameter implicitly has type `&PyType`.
- For details on `parameter-list`, see the documentation of the `Method arguments` section.
- The return type must be `PyResult<T>` or `T` for some `T` that implements `IntoPy<PyObject>`.
Static methods
To create a static method for a custom class, the method needs to be annotated with the `#[staticmethod]` attribute. The return type must be `T` or `PyResult<T>` for some `T` that implements `IntoPy<PyObject>`.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { #[allow(dead_code)] num: i32, } #[pymethods] impl MyClass { #[staticmethod] fn static_method(param1: i32, param2: &str) -> PyResult<i32> { Ok(10) } } }
Class attributes
To create a class attribute (also called class variable), a method without any arguments can be annotated with the `#[classattr]` attribute.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass {} #[pymethods] impl MyClass { #[classattr] fn my_attribute() -> String { "hello".to_string() } } Python::with_gil(|py| { let my_class = py.get_type::<MyClass>(); pyo3::py_run!(py, my_class, "assert my_class.my_attribute == 'hello'") }); }
Note: if the method has a `Result` return type and returns an `Err`, PyO3 will panic during class creation.
If the class attribute is defined with `const` code only, one can also annotate associated constants:
use pyo3::prelude::*;

#[pyclass]
struct MyClass {}

#[pymethods]
impl MyClass {
    #[classattr]
    const MY_CONST_ATTRIBUTE: &'static str = "foobar";
}
Method arguments
Similar to `#[pyfunction]`, the `#[args]` attribute can be used to specify the way that `#[pymethods]` accept arguments. Consult the documentation for function signatures to see the parameters this attribute accepts.
The following example defines a class `MyClass` with a method `method`. This method has an `#[args]` attribute which sets default values for `num` and `name`, and indicates that `py_args` should collect all extra positional arguments and `py_kwargs` all extra keyword arguments:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::{PyDict, PyTuple}; #[pyclass] struct MyClass { num: i32, } #[pymethods] impl MyClass { #[new] #[args(num = "-1")] fn new(num: i32) -> Self { MyClass { num } } #[args( num = "10", py_args = "*", name = "\"Hello\"", py_kwargs = "**" )] fn method( &mut self, num: i32, name: &str, py_args: &PyTuple, py_kwargs: Option<&PyDict>, ) -> String { let num_before = self.num; self.num = num; format!( "py_args={:?}, py_kwargs={:?}, name={}, num={} num_before={}", py_args, py_kwargs, name, self.num, num_before, ) } } }
In Python this might be used like:
>>> import mymodule
>>> mc = mymodule.MyClass()
>>> print(mc.method(44, False, "World", 666, x=44, y=55))
py_args=('World', 666), py_kwargs=Some({'x': 44, 'y': 55}), name=Hello, num=44, num_before=-1
>>> print(mc.method(num=-1, name="World"))
py_args=(), py_kwargs=None, name=World, num=-1, num_before=44
Making class method signatures available to Python
The `text_signature = "..."` option for `#[pyfunction]` also works for classes and methods:
#![allow(dead_code)] use pyo3::prelude::*; use pyo3::types::PyType; // it works even if the item is not documented: #[pyclass(text_signature = "(c, d, /)")] struct MyClass {} #[pymethods] impl MyClass { // the signature for the constructor is attached // to the struct definition instead. #[new] fn new(c: i32, d: &str) -> Self { Self {} } // the self argument should be written $self #[pyo3(text_signature = "($self, e, f)")] fn my_method(&self, e: i32, f: i32) -> i32 { e + f } #[classmethod] #[pyo3(text_signature = "(cls, e, f)")] fn my_class_method(cls: &PyType, e: i32, f: i32) -> i32 { e + f } #[staticmethod] #[pyo3(text_signature = "(e, f)")] fn my_static_method(e: i32, f: i32) -> i32 { e + f } } fn main() -> PyResult<()> { Python::with_gil(|py| { let inspect = PyModule::import(py, "inspect")?.getattr("signature")?; let module = PyModule::new(py, "my_module")?; module.add_class::<MyClass>()?; let class = module.getattr("MyClass")?; if cfg!(not(Py_LIMITED_API)) || py.version_info() >= (3, 10) { let doc: String = class.getattr("__doc__")?.extract()?; assert_eq!(doc, ""); let sig: String = inspect .call1((class,))? .call_method0("__str__")? .extract()?; assert_eq!(sig, "(c, d, /)"); } else { let doc: String = class.getattr("__doc__")?.extract()?; assert_eq!(doc, ""); inspect.call1((class,)).expect_err("`text_signature` on classes is not compatible with compilation in `abi3` mode until Python 3.10 or greater"); } { let method = class.getattr("my_method")?; assert!(method.getattr("__doc__")?.is_none()); let sig: String = inspect .call1((method,))? .call_method0("__str__")? .extract()?; assert_eq!(sig, "(self, /, e, f)"); } { let method = class.getattr("my_class_method")?; assert!(method.getattr("__doc__")?.is_none()); let sig: String = inspect .call1((method,))? .call_method0("__str__")? .extract()?; assert_eq!(sig, "(cls, e, f)"); } { let method = class.getattr("my_static_method")?; assert!(method.getattr("__doc__")?.is_none()); let sig: String = inspect .call1((method,))? .call_method0("__str__")? .extract()?; assert_eq!(sig, "(e, f)"); } Ok(()) }) }
Note that text_signature
on classes is not compatible with compilation in
abi3
mode until Python 3.10 or greater.
#[pyclass] enums
Currently PyO3 only supports fieldless enums. PyO3 adds a class attribute for each variant, so you can access them in Python without defining #[new]
. PyO3 also provides default implementations of __richcmp__
and __int__
, so they can be compared using ==
:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] enum MyEnum { Variant, OtherVariant, } Python::with_gil(|py| { let x = Py::new(py, MyEnum::Variant).unwrap(); let y = Py::new(py, MyEnum::OtherVariant).unwrap(); let cls = py.get_type::<MyEnum>(); pyo3::py_run!(py, x y cls, r#" assert x == cls.Variant assert y == cls.OtherVariant assert x != y "#) }) }
You can also convert your enums into int
:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] enum MyEnum { Variant, OtherVariant = 10, } Python::with_gil(|py| { let cls = py.get_type::<MyEnum>(); let x = MyEnum::Variant as i32; // The exact value is assigned by the compiler. pyo3::py_run!(py, cls x, r#" assert int(cls.Variant) == x assert int(cls.OtherVariant) == 10 assert cls.OtherVariant == 10 # You can also compare against int. assert 10 == cls.OtherVariant "#) }) }
PyO3 also provides __repr__
for enums:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] enum MyEnum{ Variant, OtherVariant, } Python::with_gil(|py| { let cls = py.get_type::<MyEnum>(); let x = Py::new(py, MyEnum::Variant).unwrap(); pyo3::py_run!(py, cls x, r#" assert repr(x) == 'MyEnum.Variant' assert repr(cls.OtherVariant) == 'MyEnum.OtherVariant' "#) }) }
All methods defined by PyO3 can be overridden. For example here's how you override __repr__
:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] enum MyEnum { Answer = 42, } #[pymethods] impl MyEnum { fn __repr__(&self) -> &'static str { "42" } } Python::with_gil(|py| { let cls = py.get_type::<MyEnum>(); pyo3::py_run!(py, cls, "assert repr(cls.Answer) == '42'") }) }
Enums and their variants can also be renamed using #[pyo3(name)]
.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass(name = "RenamedEnum")] enum MyEnum { #[pyo3(name = "UPPERCASE")] Variant, } Python::with_gil(|py| { let x = Py::new(py, MyEnum::Variant).unwrap(); let cls = py.get_type::<MyEnum>(); pyo3::py_run!(py, x cls, r#" assert repr(x) == 'RenamedEnum.UPPERCASE' assert x == cls.UPPERCASE "#) }) }
You may not use enums as a base class or let enums inherit from other classes.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass(subclass)] enum BadBase{ Var1, } }
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass(subclass)] struct Base; #[pyclass(extends=Base)] enum BadSubclass{ Var1, } }
#[pyclass]
enums are currently not interoperable with IntEnum
in Python.
Implementation details
The #[pyclass]
macros rely on a lot of conditional code generation: each #[pyclass]
can optionally have a #[pymethods]
block.
To support this flexibility the #[pyclass]
macro expands to a blob of boilerplate code which sets up the structure for "dtolnay specialization". This implementation pattern enables the Rust compiler to use #[pymethods]
implementations when they are present, and fall back to default (empty) definitions when they are not.
This simple technique works for the case when there is zero or one implementations. To support multiple #[pymethods]
for a #[pyclass]
(in the multiple-pymethods
feature), a registry mechanism provided by the inventory
crate is used instead. This collects impl
s at library load time, but isn't supported on all platforms. See inventory: how it works for more details.
The #[pyclass]
macro expands to roughly the code seen below. The PyClassImplCollector
is the type used internally by PyO3 for dtolnay specialization:
#![allow(unused)] fn main() { #[cfg(not(any(feature = "multiple-pymethods", feature = "pyproto")))] { use pyo3::prelude::*; // Note: the implementation differs slightly with the `pyproto` or `multiple-pymethods` features enabled. struct MyClass { #[allow(dead_code)] num: i32, } unsafe impl pyo3::type_object::PyTypeInfo for MyClass { type AsRefTarget = pyo3::PyCell<Self>; const NAME: &'static str = "MyClass"; const MODULE: ::std::option::Option<&'static str> = ::std::option::Option::None; #[inline] fn type_object_raw(py: pyo3::Python<'_>) -> *mut pyo3::ffi::PyTypeObject { use pyo3::type_object::LazyStaticType; static TYPE_OBJECT: LazyStaticType = LazyStaticType::new(); TYPE_OBJECT.get_or_init::<Self>(py) } } impl pyo3::PyClass for MyClass { type Frozen = pyo3::pyclass::boolean_struct::False; } impl<'a, 'py> pyo3::impl_::extract_argument::PyFunctionArgument<'a, 'py> for &'a MyClass { type Holder = ::std::option::Option<pyo3::PyRef<'py, MyClass>>; #[inline] fn extract(obj: &'py pyo3::PyAny, holder: &'a mut Self::Holder) -> pyo3::PyResult<Self> { pyo3::impl_::extract_argument::extract_pyclass_ref(obj, holder) } } impl<'a, 'py> pyo3::impl_::extract_argument::PyFunctionArgument<'a, 'py> for &'a mut MyClass { type Holder = ::std::option::Option<pyo3::PyRefMut<'py, MyClass>>; #[inline] fn extract(obj: &'py pyo3::PyAny, holder: &'a mut Self::Holder) -> pyo3::PyResult<Self> { pyo3::impl_::extract_argument::extract_pyclass_ref_mut(obj, holder) } } impl pyo3::IntoPy<PyObject> for MyClass { fn into_py(self, py: pyo3::Python<'_>) -> pyo3::PyObject { pyo3::IntoPy::into_py(pyo3::Py::new(py, self).unwrap(), py) } } impl pyo3::impl_::pyclass::PyClassImpl for MyClass { const DOC: &'static str = "Class for demonstration\u{0}"; const IS_BASETYPE: bool = false; const IS_SUBCLASS: bool = false; type Layout = PyCell<MyClass>; type BaseType = PyAny; type ThreadChecker = pyo3::impl_::pyclass::ThreadCheckerStub<MyClass>; type PyClassMutability = <<pyo3::PyAny as pyo3::impl_::pyclass::PyClassBaseType>::PyClassMutability as pyo3::impl_::pycell::PyClassMutability>::MutableChild; type Dict = pyo3::impl_::pyclass::PyClassDummySlot; type WeakRef = pyo3::impl_::pyclass::PyClassDummySlot; type BaseNativeType = pyo3::PyAny; fn items_iter() -> pyo3::impl_::pyclass::PyClassItemsIter { use pyo3::impl_::pyclass::*; let collector = PyClassImplCollector::<MyClass>::new(); static INTRINSIC_ITEMS: PyClassItems = PyClassItems { slots: &[], methods: &[] }; PyClassItemsIter::new(&INTRINSIC_ITEMS, collector.py_methods()) } } Python::with_gil(|py| { let cls = py.get_type::<MyClass>(); pyo3::py_run!(py, cls, "assert cls.__name__ == 'MyClass'") }); } }
Magic methods and slots
Python's object model defines several protocols for different object behavior, such as the sequence, mapping, and number protocols. You may be familiar with implementing these protocols in Python classes by "magic" methods, such as __str__
or __repr__
. Because of the double-underscores surrounding their name, these are also known as "dunder" methods.
In the Python C-API which PyO3 is implemented upon, many of these magic methods have to be placed into special "slots" on the class type object, as covered in the previous section. There are two ways in which this can be done:
- In
#[pymethods]
, if the name of the method is a recognised magic method, PyO3 will place it in the type object automatically. - [Deprecated since PyO3 0.16] In special traits combined with the
#[pyproto]
attribute.
(There are also many magic methods which don't have a special slot, such as __dir__
. These methods can be implemented as normal in #[pymethods]
.)
If a function name in #[pymethods]
is a recognised magic method, it will be automatically placed into the correct slot in the Python type object. The function name is taken from the usual rules for naming #[pymethods]
: the #[pyo3(name = "...")]
attribute is used if present, otherwise the Rust function name is used.
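For instance, in the following sketch (the Seconds class and to_text method are hypothetical, not taken from the PyO3 examples), the #[pyo3(name = "...")] attribute exposes an ordinarily named Rust method as a magic method so that it is placed into the corresponding slot:
use pyo3::prelude::*;

#[pyclass]
struct Seconds(u64);

#[pymethods]
impl Seconds {
    // The Rust function name is not special, but the `name` attribute
    // makes PyO3 treat it as `__str__` and fill the corresponding slot.
    #[pyo3(name = "__str__")]
    fn to_text(&self) -> String {
        format!("{}s", self.0)
    }
}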
The magic methods handled by PyO3 are very similar to the standard Python ones on this page - in particular they are the subset which have slots as defined here. Some of the slots do not have a magic method in Python, which leads to a few additional magic methods defined only in PyO3:
- Magic methods for garbage collection
- Magic methods for the buffer protocol
When PyO3 handles a magic method, a couple of changes apply compared to other #[pymethods]
:
- The
#[pyo3(text_signature = "...")]
attribute is not allowed - The signature is restricted to match the magic method
The following sections list all of the magic methods PyO3 currently handles. The given signatures should be interpreted as follows:
- All methods take a receiver as first argument, shown as <self>. It can be &self, &mut self or a PyCell reference like self_: PyRef<'_, Self> and self_: PyRefMut<'_, Self>, as described here.
- An optional Python<'py> argument is always allowed as the first argument.
- Return values can be optionally wrapped in PyResult.
- object means that any type is allowed that can be extracted from a Python object (if argument) or converted to a Python object (if return value).
- Other types must match what's given, e.g. pyo3::basic::CompareOp for __richcmp__'s second argument.
- For the comparison and arithmetic methods, extraction errors are not propagated as exceptions, but lead to a return of NotImplemented.
- For some magic methods, the return values are not restricted by PyO3, but checked by the Python interpreter. For example, __str__ needs to return a string object. This is indicated by object (Python type).
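As a small illustration of these conventions (a hypothetical sketch, not taken from the PyO3 examples), the following __repr__ takes an optional Python<'_> token after the receiver and wraps its return value in PyResult:
use pyo3::prelude::*;

#[pyclass]
struct Tag {
    label: String,
}

#[pymethods]
impl Tag {
    // `&self` receiver, an optional `Python<'_>` token right after it,
    // and a return value optionally wrapped in `PyResult`.
    fn __repr__(&self, _py: Python<'_>) -> PyResult<String> {
        Ok(format!("Tag({:?})", self.label))
    }
}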
Basic object customization
-
__str__(<self>) -> object (str)
-
__repr__(<self>) -> object (str)
-
__hash__(<self>) -> isize
Objects that compare equal must have the same hash value. Any type up to 64 bits may be returned instead of
isize
, PyO3 will convert to an isize automatically (wrapping unsigned types likeu64
andusize
).Disabling Python's default hash
By default, all `#[pyclass]` types have a default hash implementation from Python. Types which should not be hashable can override this by setting `__hash__` to `None`. This is the same mechanism as for a pure-Python class. This is done like so:#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct NotHashable { } #[pymethods] impl NotHashable { #[classattr] const __hash__: Option<PyObject> = None; } }
-
__richcmp__(<self>, object, pyo3::basic::CompareOp) -> object
Overloads Python comparison operations (==, !=, <, <=, >, and >=). The CompareOp argument indicates the comparison operation being performed.
Note that implementing __richcmp__ will cause Python not to generate a default __hash__ implementation, so consider implementing __hash__ when implementing __richcmp__.
Return type
The return type will normally be `PyResult`, but any Python object can be returned. If the second argument `object` is not of the type specified in the signature, the generated code will automatically `return NotImplemented`. You can use CompareOp::matches to adapt a Rust std::cmp::Ordering result to the requested comparison.
-
__getattr__(<self>, object) -> object
-
__getattribute__(<self>, object) -> object
Differences between `__getattr__` and `__getattribute__`
As in Python, `__getattr__` is only called if the attribute is not found by normal attribute lookup. `__getattribute__`, on the other hand, is called for *every* attribute access. If it wants to access existing attributes on `self`, it needs to be very careful not to introduce infinite recursion, and use `baseclass.__getattribute__()`. -
__setattr__(<self>, value: object) -> ()
-
__delattr__(<self>, object) -> ()
Overrides attribute access.
-
__bool__(<self>) -> bool
Determines the "truthiness" of an object.
-
__call__(<self>, ...) -> object
- here, any argument list can be defined as for normalpymethods
Iterable objects
Iterators can be defined using these methods:
__iter__(<self>) -> object
__next__(<self>) -> Option<object> or IterNextOutput
(see details)
Returning None
from __next__
indicates that there are no further items.
Example:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyIterator { iter: Box<dyn Iterator<Item = PyObject> + Send>, } #[pymethods] impl MyIterator { fn __iter__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> { slf } fn __next__(mut slf: PyRefMut<'_, Self>) -> Option<PyObject> { slf.iter.next() } } }
In many cases you'll have a distinction between the type being iterated over
(i.e. the iterable) and the iterator it provides. In this case, the iterable
only needs to implement __iter__()
while the iterator must implement both
__iter__()
and __next__()
. For example:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Iter { inner: std::vec::IntoIter<usize>, } #[pymethods] impl Iter { fn __iter__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> { slf } fn __next__(mut slf: PyRefMut<'_, Self>) -> Option<usize> { slf.inner.next() } } #[pyclass] struct Container { iter: Vec<usize>, } #[pymethods] impl Container { fn __iter__(slf: PyRef<'_, Self>) -> PyResult<Py<Iter>> { let iter = Iter { inner: slf.iter.clone().into_iter(), }; Py::new(slf.py(), iter) } } Python::with_gil(|py| { let container = Container { iter: vec![1, 2, 3, 4] }; let inst = pyo3::PyCell::new(py, container).unwrap(); pyo3::py_run!(py, inst, "assert list(inst) == [1, 2, 3, 4]"); pyo3::py_run!(py, inst, "assert list(iter(iter(inst))) == [1, 2, 3, 4]"); }); }
For more details on Python's iteration protocols, check out the "Iterator Types" section of the library documentation.
Returning a value from iteration
This guide has so far shown how to use Option<T>
to implement yielding values
during iteration. In Python a generator can also return a value. To express
this in Rust, PyO3 provides the IterNextOutput
enum to both Yield
values
and Return
a final value - see its docs for further details and an example.
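A minimal sketch of that pattern (the Countdown class below is hypothetical; the exact import path of IterNextOutput may differ between PyO3 versions, pyo3::pyclass::IterNextOutput is assumed here):
use pyo3::prelude::*;
use pyo3::pyclass::IterNextOutput;

#[pyclass]
struct Countdown {
    remaining: usize,
}

#[pymethods]
impl Countdown {
    fn __iter__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> {
        slf
    }

    // Yields the remaining counts, then finishes with a final value,
    // which Python surfaces as `StopIteration.value`.
    fn __next__(mut slf: PyRefMut<'_, Self>) -> IterNextOutput<usize, &'static str> {
        if slf.remaining == 0 {
            IterNextOutput::Return("done")
        } else {
            slf.remaining -= 1;
            IterNextOutput::Yield(slf.remaining + 1)
        }
    }
}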
Awaitable objects
__await__(<self>) -> object
__aiter__(<self>) -> object
__anext__(<self>) -> Option<object> or IterANextOutput
Mapping & Sequence types
The magic methods in this section can be used to implement Python container types. There are two main categories of container in Python: "mappings" such as dict
, with arbitrary keys, and "sequences" such as list
and tuple
, with integer keys.
The Python C-API which PyO3 is built upon has separate "slots" for sequences and mappings. When writing a class
in pure Python, there is no such distinction in the implementation - a __getitem__
implementation will fill the slots for both the mapping and sequence forms, for example.
By default PyO3 reproduces the Python behaviour of filling both mapping and sequence slots. This makes sense for the "simple" case which matches Python, and also for sequences, where the mapping slot is used anyway to implement slice indexing.
Mapping types usually will not want the sequence slots filled. Having them filled will lead to outcomes which may be unwanted, such as:
- The mapping type will successfully cast to PySequence. This may lead to consumers of the type handling it incorrectly.
- Python provides a default implementation of __iter__ for sequences, which calls __getitem__ with consecutive positive integers starting from 0 until an IndexError is returned. Unless the mapping only contains consecutive positive integer keys, this __iter__ implementation will likely not be the intended behavior.
Use the #[pyclass(mapping)]
annotation to instruct PyO3 to only fill the mapping slots, leaving the sequence ones empty. This will apply to __getitem__
, __setitem__
, and __delitem__
.
Use the #[pyclass(sequence)]
annotation to instruct PyO3 to fill the sq_length
slot instead of the mp_length
slot for __len__
. This will help libraries such as numpy
recognise the class as a sequence; however, it will also cause CPython to automatically add the sequence length to any negative indices before passing them to __getitem__
. (__getitem__
, __setitem__
and __delitem__
mapping slots are still used for sequences, for slice operations.)
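As a rough sketch of the mapping case (Headers is a hypothetical class, not from the PyO3 examples), a dict-like wrapper might combine #[pyclass(mapping)] with the item magic methods listed below:
use pyo3::exceptions::PyKeyError;
use pyo3::prelude::*;
use std::collections::HashMap;

// The `mapping` annotation keeps the sequence slots empty, so this type
// won't cast to `PySequence` and won't get the default integer-indexed
// `__iter__` behaviour described above.
#[pyclass(mapping)]
struct Headers {
    inner: HashMap<String, String>,
}

#[pymethods]
impl Headers {
    #[new]
    fn new() -> Self {
        Self { inner: HashMap::new() }
    }

    fn __len__(&self) -> usize {
        self.inner.len()
    }

    fn __getitem__(&self, key: &str) -> PyResult<String> {
        self.inner
            .get(key)
            .cloned()
            .ok_or_else(|| PyKeyError::new_err(key.to_string()))
    }

    fn __setitem__(&mut self, key: String, value: String) {
        self.inner.insert(key, value);
    }

    fn __delitem__(&mut self, key: &str) -> PyResult<()> {
        self.inner
            .remove(key)
            .map(|_| ())
            .ok_or_else(|| PyKeyError::new_err(key.to_string()))
    }
}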
-
__len__(<self>) -> usize
Implements the built-in function
len()
. -
__contains__(<self>, object) -> bool
Implements membership test operators. Should return true if
item
is inself
, false otherwise. For objects that don’t define__contains__()
, the membership test simply traverses the sequence until it finds a match.Disabling Python's default contains
By default, all
#[pyclass]
types with an__iter__
method support a default implementation of thein
operator. Types which do not want this can override this by setting__contains__
toNone
. This is the same mechanism as for a pure-Python class. This is done like so:#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct NoContains { } #[pymethods] impl NoContains { #[classattr] const __contains__: Option<PyObject> = None; } }
-
__getitem__(<self>, object) -> object
Implements retrieval of the
self[a]
element.Note: Negative integer indexes are not handled specially.
-
__setitem__(<self>, object, object) -> ()
Implements assignment to the
self[a]
element. Should only be implemented if elements can be replaced. -
__delitem__(<self>, object) -> ()
Implements deletion of the
self[a]
element. Should only be implemented if elements can be deleted.
-
fn __concat__(&self, other: impl FromPyObject) -> PyResult<impl ToPyObject>
Concatenates two sequences. Used by the
+
operator, after trying the numeric addition via the__add__
and__radd__
methods. -
fn __repeat__(&self, count: isize) -> PyResult<impl ToPyObject>
Repeats the sequence
count
times. Used by the*
operator, after trying the numeric multiplication via the__mul__
and__rmul__
methods. -
fn __inplace_concat__(&self, other: impl FromPyObject) -> PyResult<impl ToPyObject>
Concatenates two sequences. Used by the
+=
operator, after trying the numeric addition via the__iadd__
method. -
fn __inplace_repeat__(&self, count: isize) -> PyResult<impl ToPyObject>
Concatenates two sequences. Used by the
*=
operator, after trying the numeric multiplication via the__imul__
method.
Descriptors
__get__(<self>, object, object) -> object
__set__(<self>, object, object) -> ()
__delete__(<self>, object) -> ()
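A possible sketch of a simple (non-data) descriptor, assuming a hypothetical ConstantAttribute class; __get__ receives the instance the attribute is accessed on (None when it is looked up on the class itself) and the owning type:
use pyo3::prelude::*;

#[pyclass]
struct ConstantAttribute {
    value: i32,
}

#[pymethods]
impl ConstantAttribute {
    // `obj` is the instance (the `None` object for class access),
    // `objtype` is the class owning the attribute.
    fn __get__(&self, _obj: &PyAny, _objtype: &PyAny) -> i32 {
        self.value
    }
}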
Numeric types
Binary arithmetic operations (+
, -
, *
, @
, /
, //
, %
, divmod()
,
pow()
and **
, <<
, >>
, &
, ^
, and |
) and their reflected versions:
(If the object
is not of the type specified in the signature, the generated code
will automatically return NotImplemented
.)
__add__(<self>, object) -> object
__radd__(<self>, object) -> object
__sub__(<self>, object) -> object
__rsub__(<self>, object) -> object
__mul__(<self>, object) -> object
__rmul__(<self>, object) -> object
__matmul__(<self>, object) -> object
__rmatmul__(<self>, object) -> object
__floordiv__(<self>, object) -> object
__rfloordiv__(<self>, object) -> object
__truediv__(<self>, object) -> object
__rtruediv__(<self>, object) -> object
__divmod__(<self>, object) -> object
__rdivmod__(<self>, object) -> object
__mod__(<self>, object) -> object
__rmod__(<self>, object) -> object
__lshift__(<self>, object) -> object
__rlshift__(<self>, object) -> object
__rshift__(<self>, object) -> object
__rrshift__(<self>, object) -> object
__and__(<self>, object) -> object
__rand__(<self>, object) -> object
__xor__(<self>, object) -> object
__rxor__(<self>, object) -> object
__or__(<self>, object) -> object
__ror__(<self>, object) -> object
__pow__(<self>, object, object) -> object
__rpow__(<self>, object, object) -> object
In-place assignment operations (+=
, -=
, *=
, @=
, /=
, //=
, %=
,
**=
, <<=
, >>=
, &=
, ^=
, |=
):
__iadd__(<self>, object) -> ()
__isub__(<self>, object) -> ()
__imul__(<self>, object) -> ()
__imatmul__(<self>, object) -> ()
__itruediv__(<self>, object) -> ()
__ifloordiv__(<self>, object) -> ()
__imod__(<self>, object) -> ()
__ipow__(<self>, object, object) -> ()
__ilshift__(<self>, object) -> ()
__irshift__(<self>, object) -> ()
__iand__(<self>, object) -> ()
__ixor__(<self>, object) -> ()
__ior__(<self>, object) -> ()
Unary operations (-
, +
, abs()
and ~
):
__pos__(<self>) -> object
__neg__(<self>) -> object
__abs__(<self>) -> object
__invert__(<self>) -> object
Coercions:
__index__(<self>) -> object (int)
__int__(<self>) -> object (int)
__float__(<self>) -> object (float)
Buffer objects
__getbuffer__(<self>, *mut ffi::Py_buffer, flags) -> ()
__releasebuffer__(<self>, *mut ffi::Py_buffer)
(no return value, not evenPyResult
)
Garbage Collector Integration
If your type owns references to other Python objects, you will need to integrate
with Python's garbage collector so that the GC is aware of those references. To
do this, implement the two methods __traverse__
and __clear__
. These
correspond to the slots tp_traverse
and tp_clear
in the Python C API.
__traverse__
must call visit.call()
for each reference to another Python
object. __clear__
must clear out any mutable references to other Python
objects (thus breaking reference cycles). Immutable references do not have to be
cleared, as every cycle must contain at least one mutable reference.
__traverse__(<self>, pyo3::class::gc::PyVisit<'_>) -> Result<(), pyo3::class::gc::PyTraverseError>
__clear__(<self>) -> ()
Example:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::PyTraverseError; use pyo3::gc::PyVisit; #[pyclass] struct ClassWithGCSupport { obj: Option<PyObject>, } #[pymethods] impl ClassWithGCSupport { fn __traverse__(&self, visit: PyVisit<'_>) -> Result<(), PyTraverseError> { if let Some(obj) = &self.obj { visit.call(obj)? } Ok(()) } fn __clear__(&mut self) { // Clear reference, this decrements ref counter. self.obj = None; } } }
#[pyproto]
traits
PyO3 can use the #[pyproto]
attribute in combination with special traits to implement the magic methods which need slots. The special traits are listed below. See also the documentation for the pyo3::class
module.
Due to complexity in the implementation and usage, these traits were deprecated in PyO3 0.16 in favour of the #[pymethods]
solution.
All #[pyproto]
methods can return T
instead of PyResult<T>
if the method implementation is infallible. In addition, if the return type is ()
, it can be omitted altogether.
Basic object customization
The PyObjectProtocol
trait provides several basic customizations.
fn __str__(&self) -> PyResult<impl ToPyObject<ObjectType=PyString>>
fn __repr__(&self) -> PyResult<impl ToPyObject<ObjectType=PyString>>
fn __hash__(&self) -> PyResult<impl PrimInt>
fn __richcmp__(&self, other: impl FromPyObject, op: CompareOp) -> PyResult<impl ToPyObject>
fn __getattr__(&self, name: impl FromPyObject) -> PyResult<impl IntoPy<PyObject>>
fn __setattr__(&mut self, name: impl FromPyObject, value: impl FromPyObject) -> PyResult<()>
fn __delattr__(&mut self, name: impl FromPyObject) -> PyResult<()>
fn __bool__(&self) -> PyResult<bool>
Emulating numeric types
The PyNumberProtocol
trait can be implemented to emulate numeric types.
fn __add__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __sub__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __mul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __matmul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __truediv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __floordiv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __mod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __divmod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __pow__(lhs: impl FromPyObject, rhs: impl FromPyObject, modulo: Option<impl FromPyObject>) -> PyResult<impl ToPyObject>
fn __lshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __and__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __or__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __xor__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
These methods are called to implement the binary arithmetic operations.
The reflected operations are also available:
fn __radd__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rsub__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rmul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rmatmul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rtruediv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rfloordiv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rmod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rdivmod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rpow__(lhs: impl FromPyObject, rhs: impl FromPyObject, modulo: Option<impl FromPyObject>) -> PyResult<impl ToPyObject>
fn __rlshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rrshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rand__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __ror__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
fn __rxor__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
The code generated for these methods expects all arguments to match the signature, and raises a TypeError if they do not.
fn __iadd__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __isub__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __imul__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __imatmul__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __itruediv__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __ifloordiv__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __imod__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __ipow__(&'p mut self, other: impl FromPyObject, modulo: impl FromPyObject) -> PyResult<()>
on Python 3.8 and later
fn __ipow__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
on Python 3.7 (see https://bugs.python.org/issue36379)
fn __ilshift__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __irshift__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __iand__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __ior__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
fn __ixor__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
The following methods implement the unary arithmetic operations:
fn __neg__(&'p self) -> PyResult<impl ToPyObject>
fn __pos__(&'p self) -> PyResult<impl ToPyObject>
fn __abs__(&'p self) -> PyResult<impl ToPyObject>
fn __invert__(&'p self) -> PyResult<impl ToPyObject>
Support for coercions:
fn __int__(&'p self) -> PyResult<impl ToPyObject>
fn __float__(&'p self) -> PyResult<impl ToPyObject>
fn __index__(&'p self) -> PyResult<impl ToPyObject>
Emulating sequential containers (such as lists or tuples)
The PySequenceProtocol
trait can be implemented to emulate
sequential container types.
For a sequence, the keys are the integers k for which 0 <= k < N, where N is the length of the sequence.
-
fn __len__(&self) -> PyResult<usize>
Implements the built-in function
len()
for the sequence. -
fn __getitem__(&self, idx: isize) -> PyResult<impl ToPyObject>
Implements evaluation of the
self[idx]
element. If theidx
value is outside the set of indexes for the sequence,IndexError
should be raised.Note: Negative integer indexes are handled as follows: if
__len__()
is defined, it is called and the sequence length is used to compute a positive index, which is passed to__getitem__()
. If__len__()
is not defined, the index is passed as is to the function. -
fn __setitem__(&mut self, idx: isize, value: impl FromPyObject) -> PyResult<()>
Implements assignment to the
self[idx]
element. Same note as for__getitem__()
. Should only be implemented if sequence elements can be replaced. -
fn __delitem__(&mut self, idx: isize) -> PyResult<()>
Implements deletion of the
self[idx]
element. Same note as for__getitem__()
. Should only be implemented if sequence elements can be deleted. -
fn __contains__(&self, item: impl FromPyObject) -> PyResult<bool>
Implements membership test operators. Should return true if
item
is inself
, false otherwise. For objects that don’t define__contains__()
, the membership test simply traverses the sequence until it finds a match. -
fn __concat__(&self, other: impl FromPyObject) -> PyResult<impl ToPyObject>
Concatenates two sequences. Used by the
+
operator, after trying the numeric addition via thePyNumberProtocol
trait method. -
fn __repeat__(&self, count: isize) -> PyResult<impl ToPyObject>
Repeats the sequence
count
times. Used by the*
operator, after trying the numeric multiplication via thePyNumberProtocol
trait method. -
fn __inplace_concat__(&mut self, other: impl FromPyObject) -> PyResult<Self>
Concatenates two sequences in place. Returns the modified first operand. Used by the
+=
operator, after trying the numeric in place addition via thePyNumberProtocol
trait method. -
fn __inplace_repeat__(&mut self, count: isize) -> PyResult<Self>
Repeats the sequence
count
times in place. Returns the modified first operand. Used by the*=
operator, after trying the numeric in place multiplication via thePyNumberProtocol
trait method.
Emulating mapping containers (such as dictionaries)
The PyMappingProtocol
trait can be implemented to emulate
mapping container types.
For a mapping, the keys may be Python objects of arbitrary type.
-
fn __len__(&self) -> PyResult<usize>
Implements the built-in function
len()
for the mapping. -
fn __getitem__(&self, key: impl FromPyObject) -> PyResult<impl ToPyObject>
Implements evaluation of the
self[key]
element. Ifkey
is of an inappropriate type,TypeError
may be raised; ifkey
is missing (not in the container),KeyError
should be raised. -
fn __setitem__(&mut self, key: impl FromPyObject, value: impl FromPyObject) -> PyResult<()>
Implements assignment to the
self[key]
element or insertion of a newkey
mapping tovalue
. Should only be implemented if the mapping supports changes to the values for keys, or if new keys can be added. The same exceptions should be raised for improper key values as for the __getitem__()
method. -
fn __delitem__(&mut self, key: impl FromPyObject) -> PyResult<()>
Implements deletion of the
self[key]
element. Should only be implemented if the mapping supports removal of keys. The same exceptions should be raised for improper key values as for the__getitem__()
method.
Iterator Types
Iterators can be defined using the PyIterProtocol
trait.
It includes two methods __iter__
and __next__
:
fn __iter__(slf: PyRefMut<'_, Self>) -> PyResult<impl IntoPy<PyObject>>
fn __next__(slf: PyRefMut<'_, Self>) -> PyResult<Option<impl IntoPy<PyObject>>>
These two methods can take either PyRef<'_, Self>
or PyRefMut<'_, Self>
as their
first argument, so that a mutable borrow can be avoided if needed.
For details, look at the #[pymethods]
section regarding iterator methods.
Garbage Collector Integration
Implement the PyGCProtocol
trait for your struct.
For details, look at the #[pymethods]
section regarding GC methods.
Basic object customization
Recall the Number
class from the previous chapter:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { #[new] fn new(value: i32) -> Self { Self(value) } } #[pymodule] fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_class::<Number>()?; Ok(()) } }
At this point Python code can import the module, access the class and create class instances - but nothing else.
from my_module import Number
n = Number(5)
print(n)
<builtins.Number object at 0x000002B4D185D7D0>
String representations
It can't even print a user-readable representation of itself! We can fix that by defining the
__repr__
and __str__
methods inside a #[pymethods]
block. We do this by accessing the value
contained inside Number
.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { // For `__repr__` we want to return a string that Python code could use to recreate // the `Number`, like `Number(5)` for example. fn __repr__(&self) -> String { // We use the `format!` macro to create a string. Its first argument is a // format string, followed by any number of parameters which replace the // `{}`'s in the format string. // // 👇 Tuple field access in Rust uses a dot format!("Number({})", self.0) } // `__str__` is generally used to create an "informal" representation, so we // just forward to `i32`'s `ToString` trait implementation to print a bare number. fn __str__(&self) -> String { self.0.to_string() } } }
Hashing
Let's also implement hashing. We'll just hash the i32
. For that we need a Hasher
. The one
provided by std
is DefaultHasher
, which uses the SipHash algorithm.
#![allow(unused)] fn main() { use std::collections::hash_map::DefaultHasher; // Required to call the `.hash` and `.finish` methods, which are defined on traits. use std::hash::{Hash, Hasher}; use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __hash__(&self) -> u64 { let mut hasher = DefaultHasher::new(); self.0.hash(&mut hasher); hasher.finish() } } }
Note: When implementing
__hash__
and comparisons, it is important that the following property holds:k1 == k2 -> hash(k1) == hash(k2)
In other words, if two keys are equal, their hashes must also be equal. In addition, you must take care that your class's hash doesn't change during its lifetime. In this tutorial we do that by not letting Python code change our
Number
class. In other words, it is immutable.By default, all
#[pyclass]
types have a default hash implementation from Python. Types which should not be hashable can override this by setting__hash__
to None. This is the same mechanism as for a pure-Python class. This is done like so:#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct NotHashable { } #[pymethods] impl NotHashable { #[classattr] const __hash__: Option<Py<PyAny>> = None; } }
Comparisons
Unlike in Python, PyO3 does not provide the magic comparison methods you might expect like __eq__
,
__lt__
and so on. Instead you have to implement all six operations at once with __richcmp__
.
This method will be called with a value of CompareOp
depending on the operation.
#![allow(unused)] fn main() { use pyo3::class::basic::CompareOp; use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __richcmp__(&self, other: &Self, op: CompareOp) -> PyResult<bool> { match op { CompareOp::Lt => Ok(self.0 < other.0), CompareOp::Le => Ok(self.0 <= other.0), CompareOp::Eq => Ok(self.0 == other.0), CompareOp::Ne => Ok(self.0 != other.0), CompareOp::Gt => Ok(self.0 > other.0), CompareOp::Ge => Ok(self.0 >= other.0), } } } }
If you obtain the result by comparing two Rust values, as in this example, you
can take a shortcut using CompareOp::matches
:
#![allow(unused)] fn main() { use pyo3::class::basic::CompareOp; use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __richcmp__(&self, other: &Self, op: CompareOp) -> bool { op.matches(self.0.cmp(&other.0)) } } }
It checks that the std::cmp::Ordering
obtained from Rust's Ord
matches
the given CompareOp
.
Alternatively, if you want to leave some operations unimplemented, you can
return py.NotImplemented()
for some of the operations:
#![allow(unused)] fn main() { use pyo3::class::basic::CompareOp; use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __richcmp__(&self, other: &Self, op: CompareOp, py: Python<'_>) -> PyObject { match op { CompareOp::Eq => (self.0 == other.0).into_py(py), CompareOp::Ne => (self.0 != other.0).into_py(py), _ => py.NotImplemented(), } } } }
Truthiness
We'll consider Number
to be True
if it is nonzero:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __bool__(&self) -> bool { self.0 != 0 } } }
Final code
#![allow(unused)] fn main() { use std::collections::hash_map::DefaultHasher; use std::hash::{Hash, Hasher}; use pyo3::prelude::*; use pyo3::class::basic::CompareOp; #[pyclass] struct Number(i32); #[pymethods] impl Number { #[new] fn new(value: i32) -> Self { Self(value) } fn __repr__(&self) -> String { format!("Number({})", self.0) } fn __str__(&self) -> String { self.0.to_string() } fn __hash__(&self) -> u64 { let mut hasher = DefaultHasher::new(); self.0.hash(&mut hasher); hasher.finish() } fn __richcmp__(&self, other: &Self, op: CompareOp) -> PyResult<bool> { match op { CompareOp::Lt => Ok(self.0 < other.0), CompareOp::Le => Ok(self.0 <= other.0), CompareOp::Eq => Ok(self.0 == other.0), CompareOp::Ne => Ok(self.0 != other.0), CompareOp::Gt => Ok(self.0 > other.0), CompareOp::Ge => Ok(self.0 >= other.0), } } fn __bool__(&self) -> bool { self.0 != 0 } } #[pymodule] fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_class::<Number>()?; Ok(()) } }
Emulating numeric types
At this point we have a Number
class that we can't actually do any math on!
Before proceeding, we should think about how we want to handle overflows. There are three obvious solutions:
- We can have infinite precision just like Python's
int
. However that would be quite boring - we'd be reinventing the wheel. - We can raise exceptions whenever
Number
overflows, but that makes the API painful to use. - We can wrap around the boundary of
i32
. This is the approach we'll take here. To do that we'll just forward toi32
'swrapping_*
methods.
Fixing our constructor
Let's address the first overflow, in Number
's constructor:
from my_module import Number
n = Number(1 << 1337)
Traceback (most recent call last):
File "example.py", line 3, in <module>
n = Number(1 << 1337)
OverflowError: Python int too large to convert to C long
Instead of relying on the default FromPyObject
extraction to parse arguments, we can specify our
own extraction function, using the #[pyo3(from_py_with = "...")]
attribute. Unfortunately PyO3
doesn't provide a way to wrap Python integers out of the box, but we can do a Python call to mask it
and cast it to an i32
.
#![allow(unused)] fn main() { #![allow(dead_code)] use pyo3::prelude::*; fn wrap(obj: &PyAny) -> Result<i32, PyErr> { let val = obj.call_method1("__and__", (0xFFFFFFFF_u32,))?; let val: u32 = val.extract()?; // 👇 This intentionally overflows! Ok(val as i32) } }
We also add documentation, via ///
comments and the #[pyo3(text_signature = "...")]
attribute, both of which are visible to Python users.
#![allow(unused)] fn main() { #![allow(dead_code)] use pyo3::prelude::*; fn wrap(obj: &PyAny) -> Result<i32, PyErr> { let val = obj.call_method1("__and__", (0xFFFFFFFF_u32,))?; let val: u32 = val.extract()?; Ok(val as i32) } /// Did you ever hear the tragedy of Darth Signed The Overfloweth? I thought not. /// It's not a story C would tell you. It's a Rust legend. #[pyclass(module = "my_module")] #[pyo3(text_signature = "(int)")] struct Number(i32); #[pymethods] impl Number { #[new] fn new(#[pyo3(from_py_with = "wrap")] value: i32) -> Self { Self(value) } } }
With that out of the way, let's implement some operators:
#![allow(unused)] fn main() { use std::convert::TryInto; use pyo3::exceptions::{PyZeroDivisionError, PyValueError}; use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __add__(&self, other: &Self) -> Self { Self(self.0.wrapping_add(other.0)) } fn __sub__(&self, other: &Self) -> Self { Self(self.0.wrapping_sub(other.0)) } fn __mul__(&self, other: &Self) -> Self { Self(self.0.wrapping_mul(other.0)) } fn __truediv__(&self, other: &Self) -> PyResult<Self> { match self.0.checked_div(other.0) { Some(i) => Ok(Self(i)), None => Err(PyZeroDivisionError::new_err("division by zero")), } } fn __floordiv__(&self, other: &Self) -> PyResult<Self> { match self.0.checked_div(other.0) { Some(i) => Ok(Self(i)), None => Err(PyZeroDivisionError::new_err("division by zero")), } } fn __rshift__(&self, other: &Self) -> PyResult<Self> { match other.0.try_into() { Ok(rhs) => Ok(Self(self.0.wrapping_shr(rhs))), Err(_) => Err(PyValueError::new_err("negative shift count")), } } fn __lshift__(&self, other: &Self) -> PyResult<Self> { match other.0.try_into() { Ok(rhs) => Ok(Self(self.0.wrapping_shl(rhs))), Err(_) => Err(PyValueError::new_err("negative shift count")), } } } }
Unary arithmetic operations
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Number(i32); #[pymethods] impl Number { fn __pos__(slf: PyRef<'_, Self>) -> PyRef<'_, Self> { slf } fn __neg__(&self) -> Self { Self(-self.0) } fn __abs__(&self) -> Self { Self(self.0.abs()) } fn __invert__(&self) -> Self { Self(!self.0) } } }
Support for the complex()
, int()
and float()
built-in functions.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Number(i32); use pyo3::types::PyComplex; #[pymethods] impl Number { fn __int__(&self) -> i32 { self.0 } fn __float__(&self) -> f64 { self.0 as f64 } fn __complex__<'py>(&self, py: Python<'py>) -> &'py PyComplex { PyComplex::from_doubles(py, self.0 as f64, 0.0) } } }
We do not implement the in-place operations like __iadd__
because we do not wish to mutate Number
.
Similarly we're not interested in supporting operations with different types, so we do not implement
the reflected operations like __radd__
either.
Now Python can use our Number
class:
from my_module import Number
def hash_djb2(s: str):
'''
A version of Daniel J. Bernstein's djb2 string hashing algorithm
Like many hashing algorithms, it relies on integer wrapping.
'''
n = Number(0)
five = Number(5)
for x in s:
n = Number(ord(x)) + ((n << five) - n)
return n
assert hash_djb2('l50_50') == Number(-1152549421)
Final code
use std::collections::hash_map::DefaultHasher; use std::hash::{Hash, Hasher}; use std::convert::TryInto; use pyo3::exceptions::{PyValueError, PyZeroDivisionError}; use pyo3::prelude::*; use pyo3::class::basic::CompareOp; use pyo3::types::PyComplex; fn wrap(obj: &PyAny) -> Result<i32, PyErr> { let val = obj.call_method1("__and__", (0xFFFFFFFF_u32,))?; let val: u32 = val.extract()?; Ok(val as i32) } /// Did you ever hear the tragedy of Darth Signed The Overfloweth? I thought not. /// It's not a story C would tell you. It's a Rust legend. #[pyclass(module = "my_module")] #[pyo3(text_signature = "(int)")] struct Number(i32); #[pymethods] impl Number { #[new] fn new(#[pyo3(from_py_with = "wrap")] value: i32) -> Self { Self(value) } fn __repr__(&self) -> String { format!("Number({})", self.0) } fn __str__(&self) -> String { self.0.to_string() } fn __hash__(&self) -> u64 { let mut hasher = DefaultHasher::new(); self.0.hash(&mut hasher); hasher.finish() } fn __richcmp__(&self, other: &Self, op: CompareOp) -> PyResult<bool> { match op { CompareOp::Lt => Ok(self.0 < other.0), CompareOp::Le => Ok(self.0 <= other.0), CompareOp::Eq => Ok(self.0 == other.0), CompareOp::Ne => Ok(self.0 != other.0), CompareOp::Gt => Ok(self.0 > other.0), CompareOp::Ge => Ok(self.0 >= other.0), } } fn __bool__(&self) -> bool { self.0 != 0 } fn __add__(&self, other: &Self) -> Self { Self(self.0.wrapping_add(other.0)) } fn __sub__(&self, other: &Self) -> Self { Self(self.0.wrapping_sub(other.0)) } fn __mul__(&self, other: &Self) -> Self { Self(self.0.wrapping_mul(other.0)) } fn __truediv__(&self, other: &Self) -> PyResult<Self> { match self.0.checked_div(other.0) { Some(i) => Ok(Self(i)), None => Err(PyZeroDivisionError::new_err("division by zero")), } } fn __floordiv__(&self, other: &Self) -> PyResult<Self> { match self.0.checked_div(other.0) { Some(i) => Ok(Self(i)), None => Err(PyZeroDivisionError::new_err("division by zero")), } } fn __rshift__(&self, other: &Self) -> PyResult<Self> { match other.0.try_into() { Ok(rhs) => Ok(Self(self.0.wrapping_shr(rhs))), Err(_) => Err(PyValueError::new_err("negative shift count")), } } fn __lshift__(&self, other: &Self) -> PyResult<Self> { match other.0.try_into() { Ok(rhs) => Ok(Self(self.0.wrapping_shl(rhs))), Err(_) => Err(PyValueError::new_err("negative shift count")), } } fn __xor__(&self, other: &Self) -> Self { Self(self.0 ^ other.0) } fn __or__(&self, other: &Self) -> Self { Self(self.0 | other.0) } fn __and__(&self, other: &Self) -> Self { Self(self.0 & other.0) } fn __int__(&self) -> i32 { self.0 } fn __float__(&self) -> f64 { self.0 as f64 } fn __complex__<'py>(&self, py: Python<'py>) -> &'py PyComplex { PyComplex::from_doubles(py, self.0 as f64, 0.0) } } #[pymodule] fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_class::<Number>()?; Ok(()) } const SCRIPT: &'static str = r#" def hash_djb2(s: str): n = Number(0) five = Number(5) for x in s: n = Number(ord(x)) + ((n << five) - n) return n assert hash_djb2('l50_50') == Number(-1152549421) assert hash_djb2('logo') == Number(3327403) assert hash_djb2('horizon') == Number(1097468315) assert Number(2) + Number(2) == Number(4) assert Number(2) + Number(2) != Number(5) assert Number(13) - Number(7) == Number(6) assert Number(13) - Number(-7) == Number(20) assert Number(13) / Number(7) == Number(1) assert Number(13) // Number(7) == Number(1) assert Number(13) * Number(7) == Number(13*7) assert Number(13) > Number(7) assert Number(13) < Number(20) assert Number(13) == Number(13) assert Number(13) >= Number(7) 
assert Number(13) <= Number(20) assert Number(13) == Number(13) assert (True if Number(1) else False) assert (False if Number(0) else True) assert int(Number(13)) == 13 assert float(Number(13)) == 13 assert Number.__doc__ == "Did you ever hear the tragedy of Darth Signed The Overfloweth? I thought not.\nIt's not a story C would tell you. It's a Rust legend." assert Number(12345234523452) == Number(1498514748) try: import inspect assert inspect.signature(Number).__str__() == '(int)' except ValueError: # Not supported with `abi3` before Python 3.10 pass assert Number(1337).__str__() == '1337' assert Number(1337).__repr__() == 'Number(1337)' "#; use pyo3::PyTypeInfo; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let globals = PyModule::import(py, "__main__")?.dict(); globals.set_item("Number", Number::type_object(py))?; py.run(SCRIPT, Some(globals), None)?; Ok(()) }) }
Appendix: Writing some unsafe code
At the beginning of this chapter we said that PyO3 doesn't provide a way to wrap Python integers out of the box but that's a half truth. There's not a PyO3 API for it, but there's a Python C API function that does:
unsigned long PyLong_AsUnsignedLongMask(PyObject *obj)
We can call this function from Rust by using pyo3::ffi::PyLong_AsUnsignedLongMask
. This is an unsafe
function, which means we have to use an unsafe block to call it and take responsibility for upholding
the contracts of this function. Let's review those contracts:
- The GIL must be held. If it's not, calling this function causes a data race.
- The pointer must be valid, i.e. it must be properly aligned and point to a valid Python object.
Let's create that helper function. The signature has to be fn(&PyAny) -> PyResult<T>
.
&PyAny
represents a checked borrowed reference, so the pointer derived from it is valid (and not null).- Whenever we have borrowed references to Python objects in scope, it is guaranteed that the GIL is held. This reference is also where we can get a
Python
token to use in our call toPyErr::take
.
#![allow(unused)] fn main() { #![allow(dead_code)] use std::os::raw::c_ulong; use pyo3::prelude::*; use pyo3::ffi; use pyo3::conversion::AsPyPointer; fn wrap(obj: &PyAny) -> Result<i32, PyErr> { let py: Python<'_> = obj.py(); unsafe { let ptr = obj.as_ptr(); let ret: c_ulong = ffi::PyLong_AsUnsignedLongMask(ptr); if ret == c_ulong::MAX { if let Some(err) = PyErr::take(py) { return Err(err); } } Ok(ret as i32) } } }
Emulating callable objects
Classes can be callable if they have a #[pymethod]
named __call__
.
This allows instances of a class to behave similarly to functions.
This method's signature must look like __call__(<self>, ...) -> object
- here,
any argument list can be defined as for normal pymethods
Example: Implementing a call counter
The following pyclass is a basic decorator - its constructor takes a Python object as argument and calls that object when called. An equivalent Python implementation is linked at the end.
An example crate containing this pyclass can be found here
use pyo3::prelude::*;
use pyo3::types::{PyDict, PyTuple};
use std::cell::Cell;
/// A function decorator that keeps track how often it is called.
///
/// It otherwise doesn't do anything special.
#[pyclass(name = "Counter")]
pub struct PyCounter {
// Keeps track of how many calls have gone through.
//
// See the discussion at the end for why `Cell` is used.
count: Cell<u64>,
// This is the actual function being wrapped.
wraps: Py<PyAny>,
}
#[pymethods]
impl PyCounter {
// Note that we don't validate whether `wraps` is actually callable.
//
// While we could use `PyAny::is_callable` for that, it has some flaws:
// 1. It doesn't guarantee the object can actually be called successfully
// 2. We still need to handle any exceptions that the function might raise
#[new]
fn __new__(wraps: Py<PyAny>) -> Self {
PyCounter {
count: Cell::new(0),
wraps,
}
}
#[getter]
fn count(&self) -> u64 {
self.count.get()
}
#[args(args = "*", kwargs = "**")]
fn __call__(
&self,
py: Python<'_>,
args: &PyTuple,
kwargs: Option<&PyDict>,
) -> PyResult<Py<PyAny>> {
let old_count = self.count.get();
let new_count = old_count + 1;
self.count.set(new_count);
let name = self.wraps.getattr(py, "__name__")?;
println!("{} has been called {} time(s).", name, new_count);
// After doing something, we finally forward the call to the wrapped function
let ret = self.wraps.call(py, args, kwargs)?;
// We could do something with the return value of
// the function before returning it
Ok(ret)
}
}
#[pymodule]
pub fn decorator(_py: Python<'_>, module: &PyModule) -> PyResult<()> {
module.add_class::<PyCounter>()?;
Ok(())
}
Python code:
@Counter
def say_hello():
print("hello")
say_hello()
say_hello()
say_hello()
say_hello()
assert say_hello.count == 4
Output:
say_hello has been called 1 time(s).
hello
say_hello has been called 2 time(s).
hello
say_hello has been called 3 time(s).
hello
say_hello has been called 4 time(s).
hello
Pure Python implementation
A Python implementation of this looks similar to the Rust version:
class Counter:
def __init__(self, wraps):
self.count = 0
self.wraps = wraps
def __call__(self, *args, **kwargs):
self.count += 1
print(f"{self.wraps.__name__} has been called {self.count} time(s)")
return self.wraps(*args, **kwargs)
Note that it can also be implemented as a higher order function:
def Counter(wraps):
count = 0
def call(*args, **kwargs):
nonlocal count
count += 1
print(f"{wraps.__name__} has been called {count} time(s)")
return wraps(*args, **kwargs)
return call
What is the Cell
for?
A previous implementation used a normal u64
, which meant it required a &mut self
receiver to update the count:
#[args(args = "*", kwargs = "**")]
fn __call__(&mut self, py: Python<'_>, args: &PyTuple, kwargs: Option<&PyDict>) -> PyResult<Py<PyAny>> {
self.count += 1;
let name = self.wraps.getattr(py, "__name__")?;
println!("{} has been called {} time(s).", name, self.count);
// After doing something, we finally forward the call to the wrapped function
let ret = self.wraps.call(py, args, kwargs)?;
// We could do something with the return value of
// the function before returning it
Ok(ret)
}
The problem with this is that the &mut self
receiver means PyO3 has to borrow it exclusively,
and hold this borrow across the self.wraps.call(py, args, kwargs)
call. This call returns control to the user's Python code
which is free to call arbitrary things, including the decorated function. If that happens PyO3 is unable to create a second unique borrow and will be forced to raise an exception.
As a result, something innocent like this will raise an exception:
@Counter
def say_hello():
if say_hello.count < 2:
print(f"hello from decorator")
say_hello()
# RuntimeError: Already borrowed
The implementation in this chapter fixes that by never borrowing exclusively; all the methods take &self
as receivers, of which multiple may exist simultaneously. This requires a shared counter and the easiest way to do that is to use Cell
, so that's what is used here.
This shows the dangers of running arbitrary Python code - note that "running arbitrary Python code" can be far more subtle than the example above:
- Python's asynchronous executor may park the current thread in the middle of Python code, even in Python code that you control, and let other Python code run.
- Dropping arbitrary Python objects may invoke destructors defined in Python (
__del__
methods). - Calling Python's C-api (most PyO3 apis call C-api functions internally) may raise exceptions, which may allow Python code in signal handlers to run.
This is especially important if you are writing unsafe code; Python code must never be able to cause undefined behavior. You must ensure that your Rust code is in a consistent state before doing any of the above things.
Type conversions
In this portion of the guide we'll talk about the mapping of Python types to Rust types offered by PyO3, as well as the traits available to perform conversions between them.
Mapping of Rust types to Python types
When writing functions callable from Python (such as a #[pyfunction]
or in a #[pymethods]
block), the trait FromPyObject
is required for function arguments, and IntoPy<PyObject>
is required for function return values.
Consult the tables in the following section to find the Rust types provided by PyO3 which implement these traits.
Argument Types
When accepting a function argument, it is possible to either use Rust library types or PyO3's Python-native types. (See the next section for discussion on when to use each.)
The table below contains the Python type and the corresponding function argument types that will accept them:
Python | Rust | Rust (Python-native) |
---|---|---|
object | - | &PyAny |
str | String , Cow<str> , &str , OsString , PathBuf | &PyUnicode |
bytes | Vec<u8> , &[u8] | &PyBytes |
bool | bool | &PyBool |
int | Any integer type (i32 , u32 , usize , etc) | &PyLong |
float | f32 , f64 | &PyFloat |
complex | num_complex::Complex 1 | &PyComplex |
list[T] | Vec<T> | &PyList |
dict[K, V] | HashMap<K, V> , BTreeMap<K, V> , hashbrown::HashMap<K, V> 2, indexmap::IndexMap<K, V> 3 | &PyDict |
tuple[T, U] | (T, U) , Vec<T> | &PyTuple |
set[T] | HashSet<T> , BTreeSet<T> , hashbrown::HashSet<T> 2 | &PySet |
frozenset[T] | HashSet<T> , BTreeSet<T> , hashbrown::HashSet<T> 2 | &PyFrozenSet |
bytearray | Vec<u8> | &PyByteArray |
slice | - | &PySlice |
type | - | &PyType |
module | - | &PyModule |
datetime.datetime | - | &PyDateTime |
datetime.date | - | &PyDate |
datetime.time | - | &PyTime |
datetime.tzinfo | - | &PyTzInfo |
datetime.timedelta | - | &PyDelta |
typing.Optional[T] | Option<T> | - |
typing.Sequence[T] | Vec<T> | &PySequence |
typing.Mapping[K, V] | HashMap<K, V> , BTreeMap<K, V> , hashbrown::HashMap<K, V> 2, indexmap::IndexMap<K, V> 3 | &PyMapping |
typing.Iterator[Any] | - | &PyIterator |
typing.Union[...] | See #[derive(FromPyObject)] | - |
There are also a few special types related to the GIL and Rust-defined #[pyclass]
es which may come in useful:
What | Description |
---|---|
Python | A GIL token, used to pass to PyO3 constructors to prove ownership of the GIL |
Py<T> | A Python object isolated from the GIL lifetime. This can be sent to other threads. |
PyObject | An alias for Py<PyAny> |
&PyCell<T> | A #[pyclass] value owned by Python. |
PyRef<T> | A #[pyclass] borrowed immutably. |
PyRefMut<T> | A #[pyclass] borrowed mutably. |
For more detail on accepting #[pyclass]
values as function arguments, see the section of this guide on Python Classes.
Using Rust library types vs Python-native types
Using Rust library types as function arguments will incur a conversion cost compared to using the Python-native types. Using the Python-native types is almost zero-cost (they just require a type check similar to the Python builtin function isinstance()
).
However, once that conversion cost has been paid, the Rust standard library types offer a number of benefits:
- You can write functionality in native-speed Rust code (free of Python's runtime costs).
- You get better interoperability with the rest of the Rust ecosystem.
- You can use Python::allow_threads to release the Python GIL and let other Python threads make progress while your Rust code is executing.
- You also benefit from stricter type checking. For example you can specify Vec<i32>, which will only accept a Python list containing integers. The Python-native equivalent, &PyList, would accept a Python list containing Python objects of any type. (A sketch contrasting the two follows below.)
For most PyO3 usage the conversion cost is worth paying to get these benefits. As always, if you're not sure it's worth it in your case, benchmark it!
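For example, here is a rough sketch (with made-up function names) contrasting the two argument styles:

use pyo3::prelude::*;
use pyo3::types::PyList;

// The whole argument is converted to a Rust Vec up front; elements
// that are not integers are rejected with a TypeError.
#[pyfunction]
fn sum_ints(values: Vec<i32>) -> i32 {
    values.iter().sum()
}

// Zero-cost: only an isinstance-style check is performed, and elements
// keep whatever Python type they have.
#[pyfunction]
fn list_length(values: &PyList) -> usize {
    values.len()
}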
Returning Rust values to Python
When returning values from functions callable from Python, Python-native types (&PyAny
, &PyDict
etc.) can be used with zero cost.
Because these types are references, in some situations the Rust compiler may ask for lifetime annotations. If this is the case, you should use Py<PyAny>
, Py<PyDict>
etc. instead - which are also zero-cost. For all of these Python-native types T
, Py<T>
can be created from T
with an .into()
conversion.
If your function is fallible, it should return PyResult<T>
or Result<T, E>
where E
implements From<E> for PyErr
. This will raise a Python
exception if the Err
variant is returned.
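As a brief sketch (function names are illustrative only), a Python-native return type and plain Rust return types might look like this:

use pyo3::prelude::*;
use pyo3::types::PyDict;

// Returning a Python-native type is zero-cost; Py<PyDict> avoids the
// need for lifetime annotations on the return type.
#[pyfunction]
fn new_dict(py: Python<'_>) -> Py<PyDict> {
    PyDict::new(py).into()
}

// Plain Rust values convert on return: this produces a Python tuple (str, int).
#[pyfunction]
fn name_and_age() -> (String, u8) {
    ("ferris".to_string(), 42)
}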
Finally, the following Rust types are also able to convert to Python as return values:
Rust type | Resulting Python Type |
---|---|
String | str |
&str | str |
bool | bool |
Any integer type (i32 , u32 , usize , etc) | int |
f32 , f64 | float |
Option<T> | Optional[T] |
(T, U) | Tuple[T, U] |
Vec<T> | List[T] |
HashMap<K, V> | Dict[K, V] |
BTreeMap<K, V> | Dict[K, V] |
HashSet<T> | Set[T] |
BTreeSet<T> | Set[T] |
&PyCell<T: PyClass> | T |
PyRef<T: PyClass> | T |
PyRefMut<T: PyClass> | T |
1. Requires the num-complex optional feature.
2. Requires the hashbrown optional feature.
3. Requires the indexmap optional feature.
Conversion traits
PyO3 provides some handy traits to convert between Python types and Rust types.
.extract()
and the FromPyObject
trait
The easiest way to convert a Python object to a Rust value is using
.extract()
. It returns a PyResult
with a type error if the conversion
fails, so usually you will use something like
use pyo3::prelude::*; use pyo3::types::PyList; fn main() -> PyResult<()> { Python::with_gil(|py| { let list = PyList::new(py, b"foo"); let v: Vec<i32> = list.extract()?; assert_eq!(&v, &[102, 111, 111]); Ok(()) }) }
This method is available for many Python object types, and can produce a wide
variety of Rust types, which you can check out in the implementor list of
FromPyObject
.
FromPyObject
is also implemented for your own Rust types wrapped as Python
objects (see the chapter about classes). There, in order to both be
able to operate on mutable references and satisfy Rust's rules of non-aliasing
mutable references, you have to extract the PyO3 reference wrappers PyRef
and PyRefMut
. They work like the reference wrappers of
std::cell::RefCell
and ensure (at runtime) that Rust borrows are allowed.
Deriving FromPyObject
FromPyObject
can be automatically derived for many kinds of structs and enums
if the member types themselves implement FromPyObject
. This even includes members
with a generic type T: FromPyObject
. Derivation for empty enums, enum variants and
structs is not supported.
Deriving FromPyObject
for structs
The derivation generates code that will attempt to access the attribute my_string
on
the Python object, i.e. obj.getattr("my_string")
, and call extract()
on the attribute.
use pyo3::prelude::*; #[derive(FromPyObject)] struct RustyStruct { my_string: String, } fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let module = PyModule::from_code( py, "class Foo: def __init__(self): self.my_string = 'test'", "", "", )?; let class = module.getattr("Foo")?; let instance = class.call0()?; let rustystruct: RustyStruct = instance.extract()?; assert_eq!(rustystruct.my_string, "test"); Ok(()) }) }
By setting the #[pyo3(item)]
attribute on the field, PyO3 will attempt to extract the value by calling the get_item
method on the Python object.
use pyo3::prelude::*; #[derive(FromPyObject)] struct RustyStruct { #[pyo3(item)] my_string: String, } use pyo3::types::PyDict; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let dict = PyDict::new(py); dict.set_item("my_string", "test")?; let rustystruct: RustyStruct = dict.extract()?; assert_eq!(rustystruct.my_string, "test"); Ok(()) }) }
The argument passed to getattr
and get_item
can also be configured:
use pyo3::prelude::*; #[derive(FromPyObject)] struct RustyStruct { #[pyo3(item("key"))] string_in_mapping: String, #[pyo3(attribute("name"))] string_attr: String, } fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let module = PyModule::from_code( py, "class Foo(dict): def __init__(self): self.name = 'test' self['key'] = 'test2'", "", "", )?; let class = module.getattr("Foo")?; let instance = class.call0()?; let rustystruct: RustyStruct = instance.extract()?; assert_eq!(rustystruct.string_attr, "test"); assert_eq!(rustystruct.string_in_mapping, "test2"); Ok(()) }) }
This tries to extract string_attr
from the attribute name
and string_in_mapping
from a mapping with the key "key"
. The arguments for attribute
are restricted to
non-empty string literals while item
can take any valid literal that implements
ToBorrowedObject
.
Deriving FromPyObject
for tuple structs
Tuple structs are also supported but do not allow customizing the extraction. The input is
always assumed to be a Python tuple with the same length as the Rust type, the n
th field
is extracted from the n
th item in the Python tuple.
use pyo3::prelude::*; #[derive(FromPyObject)] struct RustyTuple(String, String); use pyo3::types::PyTuple; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let tuple = PyTuple::new(py, vec!["test", "test2"]); let rustytuple: RustyTuple = tuple.extract()?; assert_eq!(rustytuple.0, "test"); assert_eq!(rustytuple.1, "test2"); Ok(()) }) }
Tuple structs with a single field are treated as wrapper types which are described in the following section. To override this behaviour and ensure that the input is in fact a tuple, specify the struct as
use pyo3::prelude::*; #[derive(FromPyObject)] struct RustyTuple((String,)); use pyo3::types::PyTuple; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let tuple = PyTuple::new(py, vec!["test"]); let rustytuple: RustyTuple = tuple.extract()?; assert_eq!((rustytuple.0).0, "test"); Ok(()) }) }
Deriving FromPyObject
for wrapper types
The pyo3(transparent)
attribute can be used on structs with exactly one field. This results
in extracting directly from the input object, i.e. obj.extract()
, rather than trying to access
an item or attribute. This behaviour is enabled by default for newtype structs and tuple-variants
with a single field.
use pyo3::prelude::*; #[derive(FromPyObject)] struct RustyTransparentTupleStruct(String); #[derive(FromPyObject)] #[pyo3(transparent)] struct RustyTransparentStruct { inner: String, } use pyo3::types::PyString; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let s = PyString::new(py, "test"); let tup: RustyTransparentTupleStruct = s.extract()?; assert_eq!(tup.0, "test"); let stru: RustyTransparentStruct = s.extract()?; assert_eq!(stru.inner, "test"); Ok(()) }) }
Deriving FromPyObject
for enums
The FromPyObject
derivation for enums generates code that tries to extract the variants in the
order of the fields. As soon as a variant can be extracted successfully, that variant is returned.
This makes it possible to extract Python union types like str | int
.
The same customizations and restrictions described for struct derivations apply to enum variants,
i.e. a tuple variant assumes that the input is a Python tuple, and a struct variant defaults to
extracting fields as attributes but can be configured in the same manner. The transparent
attribute can be applied to single-field-variants.
use pyo3::prelude::*; #[derive(FromPyObject)] #[derive(Debug)] enum RustyEnum<'a> { Int(usize), // input is a positive int String(String), // input is a string IntTuple(usize, usize), // input is a 2-tuple with positive ints StringIntTuple(String, usize), // input is a 2-tuple with String and int Coordinates3d { // needs to be in front of 2d x: usize, y: usize, z: usize, }, Coordinates2d { // only gets checked if the input did not have `z` #[pyo3(attribute("x"))] a: usize, #[pyo3(attribute("y"))] b: usize, }, #[pyo3(transparent)] CatchAll(&'a PyAny), // This extraction never fails } use pyo3::types::{PyBytes, PyString}; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { { let thing = 42_u8.to_object(py); let rust_thing: RustyEnum<'_> = thing.extract(py)?; assert_eq!( 42, match rust_thing { RustyEnum::Int(i) => i, other => unreachable!("Error extracting: {:?}", other), } ); } { let thing = PyString::new(py, "text"); let rust_thing: RustyEnum<'_> = thing.extract()?; assert_eq!( "text", match rust_thing { RustyEnum::String(i) => i, other => unreachable!("Error extracting: {:?}", other), } ); } { let thing = (32_u8, 73_u8).to_object(py); let rust_thing: RustyEnum<'_> = thing.extract(py)?; assert_eq!( (32, 73), match rust_thing { RustyEnum::IntTuple(i, j) => (i, j), other => unreachable!("Error extracting: {:?}", other), } ); } { let thing = ("foo", 73_u8).to_object(py); let rust_thing: RustyEnum<'_> = thing.extract(py)?; assert_eq!( (String::from("foo"), 73), match rust_thing { RustyEnum::StringIntTuple(i, j) => (i, j), other => unreachable!("Error extracting: {:?}", other), } ); } { let module = PyModule::from_code( py, "class Foo(dict): def __init__(self): self.x = 0 self.y = 1 self.z = 2", "", "", )?; let class = module.getattr("Foo")?; let instance = class.call0()?; let rust_thing: RustyEnum<'_> = instance.extract()?; assert_eq!( (0, 1, 2), match rust_thing { RustyEnum::Coordinates3d { x, y, z } => (x, y, z), other => unreachable!("Error extracting: {:?}", other), } ); } { let module = PyModule::from_code( py, "class Foo(dict): def __init__(self): self.x = 3 self.y = 4", "", "", )?; let class = module.getattr("Foo")?; let instance = class.call0()?; let rust_thing: RustyEnum<'_> = instance.extract()?; assert_eq!( (3, 4), match rust_thing { RustyEnum::Coordinates2d { a, b } => (a, b), other => unreachable!("Error extracting: {:?}", other), } ); } { let thing = PyBytes::new(py, b"text"); let rust_thing: RustyEnum<'_> = thing.extract()?; assert_eq!( b"text", match rust_thing { RustyEnum::CatchAll(i) => i.downcast::<PyBytes>()?.as_bytes(), other => unreachable!("Error extracting: {:?}", other), } ); } Ok(()) }) }
If none of the enum variants match, a PyTypeError
containing the names of the
tested variants is returned. The names reported in the error message can be customized
through the #[pyo3(annotation = "name")]
attribute, e.g. to use conventional Python type
names:
use pyo3::prelude::*; #[derive(FromPyObject)] #[derive(Debug)] enum RustyEnum { #[pyo3(transparent, annotation = "str")] String(String), #[pyo3(transparent, annotation = "int")] Int(isize), } fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { { let thing = 42_u8.to_object(py); let rust_thing: RustyEnum = thing.extract(py)?; assert_eq!( 42, match rust_thing { RustyEnum::Int(i) => i, other => unreachable!("Error extracting: {:?}", other), } ); } { let thing = "foo".to_object(py); let rust_thing: RustyEnum = thing.extract(py)?; assert_eq!( "foo", match rust_thing { RustyEnum::String(i) => i, other => unreachable!("Error extracting: {:?}", other), } ); } { let thing = b"foo".to_object(py); let error = thing.extract::<RustyEnum>(py).unwrap_err(); assert!(error.is_instance_of::<pyo3::exceptions::PyTypeError>(py)); } Ok(()) }) }
If the input is neither a string nor an integer, the error message will be:
"'<INPUT_TYPE>' cannot be converted to 'str | int'"
.
#[derive(FromPyObject)]
Container Attributes
pyo3(transparent)
- extract the field directly from the object as obj.extract() instead of get_item() or getattr()
- Newtype structs and tuple-variants are treated as transparent by default.
- only supported for single-field structs and enum variants

pyo3(annotation = "name")
- changes the name of the failed variant in the generated error message in case of failure.
- e.g. pyo3("int") reports the variant's type as int.
- only supported for enum variants
#[derive(FromPyObject)]
Field Attributes
pyo3(attribute), pyo3(attribute("name"))
- retrieve the field from an attribute, possibly with a custom name specified as an argument
- argument must be a string-literal.

pyo3(item), pyo3(item("key"))
- retrieve the field from a mapping, possibly with the custom key specified as an argument.
- can be any literal that implements ToBorrowedObject

pyo3(from_py_with = "...")
- apply a custom function to convert the field from Python to the desired Rust type (a brief sketch follows this list).
- the argument must be the name of the function as a string.
- the function signature must be fn(&PyAny) -> PyResult<T> where T is the Rust type of the argument.
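A minimal sketch of from_py_with, assuming a hypothetical helper get_length and a class Foo whose items attribute is a list (not an example taken from the PyO3 API docs):

use pyo3::prelude::*;

// Hypothetical converter: turns the fetched Python value into its length.
fn get_length(obj: &PyAny) -> PyResult<usize> {
    obj.len()
}

#[derive(FromPyObject)]
struct RustyStruct {
    // The `items` attribute is looked up as usual, then passed to
    // `get_length` instead of calling `.extract()` on it.
    #[pyo3(from_py_with = "get_length")]
    items: usize,
}

fn main() -> PyResult<()> {
    Python::with_gil(|py| -> PyResult<()> {
        let module = PyModule::from_code(
            py,
            "class Foo:
    def __init__(self):
        self.items = [1, 2, 3]",
            "",
            "",
        )?;
        let instance = module.getattr("Foo")?.call0()?;
        let rustystruct: RustyStruct = instance.extract()?;
        assert_eq!(rustystruct.items, 3);
        Ok(())
    })
}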
IntoPy<T>
This trait defines the to-python conversion for a Rust type. It is usually implemented as
IntoPy<PyObject>
, which is the trait needed for returning a value from #[pyfunction]
and
#[pymethods]
.
All types in PyO3 implement this trait, as does a #[pyclass]
which doesn't use extends
.
Occasionally you may choose to implement this for custom types which are mapped to Python types without having a unique python type.
#![allow(unused)] fn main() { use pyo3::prelude::*; struct MyPyObjectWrapper(PyObject); impl IntoPy<PyObject> for MyPyObjectWrapper { fn into_py(self, py: Python<'_>) -> PyObject { self.0 } } }
The ToPyObject
trait
ToPyObject
is a conversion trait that allows various objects to be
converted into PyObject
. IntoPy<PyObject>
serves the
same purpose, except that it consumes self
.
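A small sketch contrasting the two (variable names are arbitrary):

use pyo3::prelude::*;
use pyo3::types::PyDict;
use std::collections::HashMap;

fn main() {
    Python::with_gil(|py| {
        let mut map = HashMap::new();
        map.insert("key", 1);

        // to_object borrows `map`, so it can still be used afterwards...
        let first: PyObject = map.to_object(py);
        // ...while into_py consumes it.
        let second: PyObject = map.into_py(py);

        // Both produce a Python dict.
        assert!(first.as_ref(py).downcast::<PyDict>().is_ok());
        assert!(second.as_ref(py).downcast::<PyDict>().is_ok());
    });
}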
Python exceptions
Defining a new exception
You can use the create_exception!
macro to define a new exception type:
#![allow(unused)] fn main() { use pyo3::create_exception; create_exception!(module, MyError, pyo3::exceptions::PyException); }
- module is the name of the containing module.
- MyError is the name of the new exception type.
For example:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::create_exception; use pyo3::types::IntoPyDict; use pyo3::exceptions::PyException; create_exception!(mymodule, CustomError, PyException); Python::with_gil(|py| { let ctx = [("CustomError", py.get_type::<CustomError>())].into_py_dict(py); pyo3::py_run!(py, *ctx, "assert str(CustomError) == \"<class 'mymodule.CustomError'>\""); pyo3::py_run!(py, *ctx, "assert CustomError('oops').args == ('oops',)"); }); }
When using PyO3 to create an extension module, you can add the new exception to the module like this, so that it is importable from Python:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyModule; use pyo3::exceptions::PyException; pyo3::create_exception!(mymodule, CustomError, PyException); #[pymodule] fn mymodule(py: Python<'_>, m: &PyModule) -> PyResult<()> { // ... other elements added to module ... m.add("CustomError", py.get_type::<CustomError>())?; Ok(()) } }
Raising an exception
As described in the function error handling chapter, to raise an exception from a #[pyfunction] or #[pymethods], return an Err(PyErr). PyO3 will automatically raise this exception for you when returning the result to Python.
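For example, a sketch of a fallible #[pyfunction] (the function name is made up):

use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

#[pyfunction]
fn check_positive(x: i32) -> PyResult<i32> {
    if x <= 0 {
        // Returning the Err variant raises ValueError at the Python call site.
        return Err(PyValueError::new_err("x must be positive"));
    }
    Ok(x)
}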
You can also manually write and fetch errors in the Python interpreter's global state:
#![allow(unused)] fn main() { use pyo3::{Python, PyErr}; use pyo3::exceptions::PyTypeError; Python::with_gil(|py| { PyTypeError::new_err("Error").restore(py); assert!(PyErr::occurred(py)); drop(PyErr::fetch(py)); }); }
Checking exception types
Python has the isinstance built-in function to check an object's type.
In PyO3 every object has the PyAny::is_instance
and PyAny::is_instance_of
methods which do the same thing.
#![allow(unused)] fn main() { use pyo3::Python; use pyo3::types::{PyBool, PyList}; Python::with_gil(|py| { assert!(PyBool::new(py, true).is_instance_of::<PyBool>().unwrap()); let list = PyList::new(py, &[1, 2, 3, 4]); assert!(!list.is_instance_of::<PyBool>().unwrap()); assert!(list.is_instance_of::<PyList>().unwrap()); }); }
To check the type of an exception, you can similarly do:
#![allow(unused)] fn main() { use pyo3::exceptions::PyTypeError; use pyo3::prelude::*; Python::with_gil(|py| { let err = PyTypeError::new_err(()); err.is_instance_of::<PyTypeError>(py); }); }
Using exceptions defined in Python code
It is possible to use an exception defined in Python code as a native Rust type.
The import_exception!
macro allows importing a specific exception class and defines a Rust type
for that exception.
#![allow(unused)] #![allow(dead_code)] fn main() { use pyo3::prelude::*; mod io { pyo3::import_exception!(io, UnsupportedOperation); } fn tell(file: &PyAny) -> PyResult<u64> { match file.call_method0("tell") { Err(_) => Err(io::UnsupportedOperation::new_err("not supported: tell")), Ok(x) => x.extract::<u64>(), } } }
pyo3::exceptions
defines exceptions for several standard library modules.
Calling Python in Rust code
This chapter of the guide documents some ways to interact with Python code from Rust:
- How to call Python functions
- How to execute existing Python code
Calling Python functions
Any Python-native object reference (such as &PyAny
, &PyList
, or &PyCell<MyClass>
) can be used to call Python functions.
PyO3 offers two APIs to make function calls:
- call - call any callable Python object.
- call_method - call a method on the Python object.
Both of these APIs take args
and kwargs
arguments (for positional and keyword arguments respectively). There are variants for less complex calls:
- call1 and call_method1 to call only with positional args.
- call0 and call_method0 to call with no arguments.
For convenience the Py<T>
smart pointer also exposes these same six API methods, but needs a Python
token as an additional first argument to prove the GIL is held.
The example below calls a Python function behind a PyObject
(aka Py<PyAny>
) reference:
use pyo3::prelude::*; use pyo3::types::PyTuple; fn main() -> PyResult<()> { let arg1 = "arg1"; let arg2 = "arg2"; let arg3 = "arg3"; Python::with_gil(|py| { let fun: Py<PyAny> = PyModule::from_code( py, "def example(*args, **kwargs): if args != (): print('called with args', args) if kwargs != {}: print('called with kwargs', kwargs) if args == () and kwargs == {}: print('called with no arguments')", "", "", )?.getattr("example")?.into(); // call object without any arguments fun.call0(py)?; // call object with PyTuple let args = PyTuple::new(py, &[arg1, arg2, arg3]); fun.call1(py, args)?; // pass arguments as rust tuple let args = (arg1, arg2, arg3); fun.call1(py, args)?; Ok(()) }) }
Creating keyword arguments
For the call
and call_method
APIs, kwargs
can be None
or Some(&PyDict)
. You can use the IntoPyDict
trait to convert other dict-like containers, e.g. HashMap
or BTreeMap
, as well as tuples with up to 10 elements and Vec
s where each element is a two-element tuple.
use pyo3::prelude::*; use pyo3::types::IntoPyDict; use std::collections::HashMap; fn main() -> PyResult<()> { let key1 = "key1"; let val1 = 1; let key2 = "key2"; let val2 = 2; Python::with_gil(|py| { let fun: Py<PyAny> = PyModule::from_code( py, "def example(*args, **kwargs): if args != (): print('called with args', args) if kwargs != {}: print('called with kwargs', kwargs) if args == () and kwargs == {}: print('called with no arguments')", "", "", )?.getattr("example")?.into(); // call object with PyDict let kwargs = [(key1, val1)].into_py_dict(py); fun.call(py, (), Some(kwargs))?; // pass arguments as Vec let kwargs = vec![(key1, val1), (key2, val2)]; fun.call(py, (), Some(kwargs.into_py_dict(py)))?; // pass arguments as HashMap let mut kwargs = HashMap::<&str, i32>::new(); kwargs.insert(key1, 1); fun.call(py, (), Some(kwargs.into_py_dict(py)))?; Ok(()) }) }
Executing existing Python code
If you already have some existing Python code that you need to execute from Rust, the following FAQs can help you select the right PyO3 functionality for your situation:
Want to access Python APIs? Then use PyModule::import
.
PyModule::import can be used to get a handle to a Python module from Rust. You can use this to import and use any Python module available in your environment.
use pyo3::prelude::*; fn main() -> PyResult<()> { Python::with_gil(|py| { let builtins = PyModule::import(py, "builtins")?; let total: i32 = builtins.getattr("sum")?.call1((vec![1, 2, 3],))?.extract()?; assert_eq!(total, 6); Ok(()) }) }
Want to run just an expression? Then use eval
.
Python::eval
is
a method to execute a Python expression
and return the evaluated value as a &PyAny
object.
use pyo3::prelude::*; fn main() -> Result<(), ()> { Python::with_gil(|py| { let result = py.eval("[i * 10 for i in range(5)]", None, None).map_err(|e| { e.print_and_set_sys_last_vars(py); })?; let res: Vec<i64> = result.extract().unwrap(); assert_eq!(res, vec![0, 10, 20, 30, 40]); Ok(()) }) }
Want to run statements? Then use run
.
Python::run
is a method to execute one or more
Python statements.
This method returns nothing (like any Python statement), but you can get
access to manipulated objects via the locals
dict.
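As a brief sketch (variable names are arbitrary), a computed value can be read back out of locals after the statements run:

use pyo3::prelude::*;
use pyo3::types::PyDict;

fn main() -> PyResult<()> {
    Python::with_gil(|py| -> PyResult<()> {
        let locals = PyDict::new(py);
        // `run` executes statements and returns nothing; the result of the
        // computation is read back from the `locals` dictionary afterwards.
        py.run("total = sum(range(5))", None, Some(locals))?;
        let total: i64 = locals.get_item("total").unwrap().extract()?;
        assert_eq!(total, 10);
        Ok(())
    })
}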
You can also use the py_run!
macro, which is a shorthand for Python::run
.
Since py_run!
panics on exceptions, we recommend you use this macro only for
quickly testing your Python extensions.
use pyo3::prelude::*; use pyo3::{PyCell, py_run}; fn main() { #[pyclass] struct UserData { id: u32, name: String, } #[pymethods] impl UserData { fn as_tuple(&self) -> (u32, String) { (self.id, self.name.clone()) } fn __repr__(&self) -> PyResult<String> { Ok(format!("User {}(id: {})", self.name, self.id)) } } Python::with_gil(|py| { let userdata = UserData { id: 34, name: "Yu".to_string(), }; let userdata = PyCell::new(py, userdata).unwrap(); let userdata_as_tuple = (34, "Yu"); py_run!(py, userdata userdata_as_tuple, r#" assert repr(userdata) == "User Yu(id: 34)" assert userdata.as_tuple() == userdata_as_tuple "#); }) }
You have a Python file or code snippet? Then use PyModule::from_code
.
PyModule::from_code
can be used to generate a Python module which can then be used just as if it was imported with
PyModule::import
.
Warning: This will compile and execute code. Never pass untrusted code to this function!
use pyo3::{prelude::*, types::{IntoPyDict, PyModule}}; fn main() -> PyResult<()> { Python::with_gil(|py| { let activators = PyModule::from_code(py, r#" def relu(x): """see https://en.wikipedia.org/wiki/Rectifier_(neural_networks)""" return max(0.0, x) def leaky_relu(x, slope=0.01): return x if x >= 0 else x * slope "#, "activators.py", "activators")?; let relu_result: f64 = activators.getattr("relu")?.call1((-1.0,))?.extract()?; assert_eq!(relu_result, 0.0); let kwargs = [("slope", 0.2)].into_py_dict(py); let lrelu_result: f64 = activators .getattr("leaky_relu")?.call((-1.0,), Some(kwargs))? .extract()?; assert_eq!(lrelu_result, -0.2); Ok(()) }) }
Include multiple Python files
You can include a file at compile time by using
std::include_str
macro.
Or you can load a file at runtime by using
std::fs::read_to_string
function.
Many Python files can be included and loaded as modules. If one file depends on
another you must preserve correct order while declaring PyModule
.
Example directory structure:
.
├── Cargo.lock
├── Cargo.toml
├── python_app
│ ├── app.py
│ └── utils
│ └── foo.py
└── src
└── main.rs
python_app/app.py
:
from utils.foo import bar

def run():
    return bar()
python_app/utils/foo.py
:
def bar():
    return "baz"
The example below shows:
- how to include the content of app.py and utils/foo.py in your Rust binary
- how to call the function run() (declared in app.py) that needs a function imported from utils/foo.py
src/main.rs
:
use pyo3::prelude::*;
fn main() -> PyResult<()> {
let py_foo = include_str!(concat!(env!("CARGO_MANIFEST_DIR"), "/python_app/utils/foo.py"));
let py_app = include_str!(concat!(env!("CARGO_MANIFEST_DIR"), "/python_app/app.py"));
let from_python = Python::with_gil(|py| -> PyResult<Py<PyAny>> {
PyModule::from_code(py, py_foo, "utils.foo", "utils.foo")?;
let app: Py<PyAny> = PyModule::from_code(py, py_app, "", "")?
.getattr("run")?
.into();
app.call0(py)
});
println!("py: {}", from_python?);
Ok(())
}
The example below shows:
- how to load the content of app.py at runtime so that it sees its dependencies automatically
- how to call the function run() (declared in app.py) that needs a function imported from utils/foo.py
It is recommended to use absolute paths because then your binary can be run
from anywhere as long as your app.py
is in the expected directory (in this example
that directory is /usr/share/python_app
).
src/main.rs
:
use pyo3::prelude::*; use pyo3::types::PyList; use std::fs; use std::path::Path; fn main() -> PyResult<()> { let path = Path::new("/usr/share/python_app"); let py_app = fs::read_to_string(path.join("app.py"))?; let from_python = Python::with_gil(|py| -> PyResult<Py<PyAny>> { let syspath: &PyList = py.import("sys")?.getattr("path")?.downcast::<PyList>()?; syspath.insert(0, &path)?; let app: Py<PyAny> = PyModule::from_code(py, &py_app, "", "")? .getattr("run")? .into(); app.call0(py) }); println!("py: {}", from_python?); Ok(()) }
Need to use a context manager from Rust?
Use context managers by directly invoking __enter__
and __exit__
.
use pyo3::prelude::*; use pyo3::types::PyModule; fn main() { Python::with_gil(|py| { let custom_manager = PyModule::from_code(py, r#" class House(object): def __init__(self, address): self.address = address def __enter__(self): print(f"Welcome to {self.address}!") def __exit__(self, type, value, traceback): if type: print(f"Sorry you had {type} trouble at {self.address}") else: print(f"Thank you for visiting {self.address}, come again soon!") "#, "house.py", "house").unwrap(); let house_class = custom_manager.getattr("House").unwrap(); let house = house_class.call1(("123 Main Street",)).unwrap(); house.call_method0("__enter__").unwrap(); let result = py.eval("undefined_variable + 1", None, None); // If the eval threw an exception we'll pass it through to the context manager. // Otherwise, __exit__ is called with empty arguments (Python "None"). match result { Ok(_) => { let none = py.None(); house.call_method1("__exit__", (&none, &none, &none)).unwrap(); }, Err(e) => { house.call_method1( "__exit__", (e.get_type(py), e.value(py), e.traceback(py)) ).unwrap(); } } }) }
GIL lifetimes, mutability and Python object types
On first glance, PyO3 provides a huge number of different types that can be used to wrap or refer to Python objects. This page delves into the details and gives an overview of their intended meaning, with examples when each type is best used.
Mutability and Rust types
Since Python has no concept of ownership, and works solely with boxed objects, any Python object can be referenced any number of times, and mutation is allowed from any reference.
The situation is helped a little by the Global Interpreter Lock (GIL), which ensures that only one thread can use the Python interpreter and its API at the same time, while non-Python operations (system calls and extension code) can unlock the GIL. (See the section on parallelism for how to do that in PyO3.)
In PyO3, holding the GIL is modeled by acquiring a token of the type
Python<'py>
, which serves three purposes:
- It provides some global API for the Python interpreter, such as eval.
- It can be passed to functions that require a proof of holding the GIL, such as Py::clone_ref.
- Its lifetime can be used to create Rust references that implicitly guarantee holding the GIL, such as &'py PyAny.
The latter two points are the reason why some APIs in PyO3 require the py: Python
argument, while others don't.
The PyO3 API for Python objects is written such that instead of requiring a
mutable Rust reference for mutating operations such as
PyList::append
, a shared reference (which, in turn, can only
be created through Python<'_>
with a GIL lifetime) is sufficient.
However, Rust structs wrapped as Python objects (called pyclass
types) usually
do need &mut
access. Due to the GIL, PyO3 can guarantee thread-safe access to them, but it cannot statically guarantee uniqueness of &mut references once an object's ownership has been passed to the Python interpreter. Instead, the aliasing rules for references are enforced at runtime using PyCell, a scheme very similar to std::cell::RefCell.
Object types
PyAny
Represents: a Python object of unspecified type, restricted to a GIL
lifetime. Currently, PyAny
can only ever occur as a reference, &PyAny
.
Used: Whenever you want to refer to some Python object and will have the
GIL for the whole duration you need to access that object. For example,
intermediate values and arguments to pyfunction
s or pymethod
s implemented
in Rust where any type is allowed.
Many general methods for interacting with Python objects are on the PyAny
struct,
such as getattr
, setattr
, and .call
.
Conversions:
For a &PyAny
object reference any
where the underlying object is a Python-native type such as
a list:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyList; Python::with_gil(|py| -> PyResult<()> { let obj: &PyAny = PyList::empty(py); // To &PyList with PyAny::downcast let _: &PyList = obj.downcast()?; // To Py<PyAny> (aka PyObject) with .into() let _: Py<PyAny> = obj.into(); // To Py<PyList> with PyAny::extract let _: Py<PyList> = obj.extract()?; Ok(()) }).unwrap(); }
For a &PyAny
object reference any
where the underlying object is a #[pyclass]
:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::{Py, Python, PyAny, PyResult}; #[pyclass] #[derive(Clone)] struct MyClass { } Python::with_gil(|py| -> PyResult<()> { let obj: &PyAny = Py::new(py, MyClass { })?.into_ref(py); // To &PyCell<MyClass> with PyAny::downcast let _: &PyCell<MyClass> = obj.downcast()?; // To Py<PyAny> (aka PyObject) with .into() let _: Py<PyAny> = obj.into(); // To Py<MyClass> with PyAny::extract let _: Py<MyClass> = obj.extract()?; // To MyClass with PyAny::extract, if MyClass: Clone let _: MyClass = obj.extract()?; // To PyRef<'_, MyClass> or PyRefMut<'_, MyClass> with PyAny::extract let _: PyRef<'_, MyClass> = obj.extract()?; let _: PyRefMut<'_, MyClass> = obj.extract()?; Ok(()) }).unwrap(); }
PyTuple
, PyDict
, and many more
Represents: a native Python object of known type, restricted to a GIL
lifetime just like PyAny
.
Used: Whenever you want to operate with native Python types while holding
the GIL. Like PyAny
, this is the most convenient form to use for function
arguments and intermediate values.
These types all implement Deref<Target = PyAny>
, so they all expose the same
methods which can be found on PyAny
.
To see all Python types exposed by PyO3
you should consult the
pyo3::types
module.
Conversions:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyList; Python::with_gil(|py| -> PyResult<()> { let list = PyList::empty(py); // Use methods from PyAny on all Python types with Deref implementation let _ = list.repr()?; // To &PyAny automatically with Deref implementation let _: &PyAny = list; // To &PyAny explicitly with .as_ref() let _: &PyAny = list.as_ref(); // To Py<T> with .into() or Py::from() let _: Py<PyList> = list.into(); // To PyObject with .into() or .to_object(py) let _: PyObject = list.into(); Ok(()) }).unwrap(); }
Py<T>
and PyObject
Represents: a GIL-independent reference to a Python object. This can be a Python native type
(like PyTuple
), or a pyclass
type implemented in Rust. The most commonly-used variant,
Py<PyAny>
, is also known as PyObject
.
Used: Whenever you want to carry around references to a Python object without caring about a GIL lifetime. For example, storing Python object references in a Rust struct that outlives the Python-Rust FFI boundary, or returning objects from functions implemented in Rust back to Python.
Can be cloned using Python reference counts with .clone()
.
Conversions:
For a Py<PyList>
, the conversions are as below:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyList; Python::with_gil(|py| { let list: Py<PyList> = PyList::empty(py).into(); // To &PyList with Py::as_ref() (borrows from the Py) let _: &PyList = list.as_ref(py); let list_clone = list.clone(); // Because `.into_ref()` will consume `list`. // To &PyList with Py::into_ref() (moves the pointer into PyO3's object storage) let _: &PyList = list.into_ref(py); let list = list_clone; // To Py<PyAny> (aka PyObject) with .into() let _: Py<PyAny> = list.into(); }) }
For a #[pyclass] struct MyClass
, the conversions for Py<MyClass>
are below:
#![allow(unused)] fn main() { use pyo3::prelude::*; Python::with_gil(|py| { #[pyclass] struct MyClass { } Python::with_gil(|py| -> PyResult<()> { let my_class: Py<MyClass> = Py::new(py, MyClass { })?; // To &PyCell<MyClass> with Py::as_ref() (borrows from the Py) let _: &PyCell<MyClass> = my_class.as_ref(py); let my_class_clone = my_class.clone(); // Because `.into_ref()` will consume `my_class`. // To &PyCell<MyClass> with Py::into_ref() (moves the pointer into PyO3's object storage) let _: &PyCell<MyClass> = my_class.into_ref(py); let my_class = my_class_clone.clone(); // To Py<PyAny> (aka PyObject) with .into_py(py) let _: Py<PyAny> = my_class.into_py(py); let my_class = my_class_clone; // To PyRef<'_, MyClass> with Py::borrow or Py::try_borrow let _: PyRef<'_, MyClass> = my_class.try_borrow(py)?; // To PyRefMut<'_, MyClass> with Py::borrow_mut or Py::try_borrow_mut let _: PyRefMut<'_, MyClass> = my_class.try_borrow_mut(py)?; Ok(()) }).unwrap(); }); }
PyCell<SomeType>
Represents: a reference to a Rust object (instance of PyClass
) which is
wrapped in a Python object. The cell part is an analog to stdlib's
RefCell
to allow access to &mut
references.
Used: for accessing pure-Rust API of the instance (members and functions
taking &SomeType
or &mut SomeType
) while maintaining the aliasing rules of
Rust references.
Like pyo3's Python native types, PyCell<T>
implements Deref<Target = PyAny>
,
so it also exposes all of the methods on PyAny
.
Conversions:
PyCell<T>
can be used to access &T
and &mut T
via PyRef<T>
and PyRefMut<T>
respectively.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { } Python::with_gil(|py| -> PyResult<()> { let cell: &PyCell<MyClass> = PyCell::new(py, MyClass { })?; // To PyRef<T> with .borrow() or .try_borrow() let py_ref: PyRef<'_, MyClass> = cell.try_borrow()?; let _: &MyClass = &*py_ref; drop(py_ref); // To PyRefMut<T> with .borrow_mut() or .try_borrow_mut() let mut py_ref_mut: PyRefMut<'_, MyClass> = cell.try_borrow_mut()?; let _: &mut MyClass = &mut *py_ref_mut; Ok(()) }).unwrap(); }
PyCell<T>
can also be accessed like a Python-native type.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { } Python::with_gil(|py| -> PyResult<()> { let cell: &PyCell<MyClass> = PyCell::new(py, MyClass { })?; // Use methods from PyAny on PyCell<T> with Deref implementation let _ = cell.repr()?; // To &PyAny automatically with Deref implementation let _: &PyAny = cell; // To &PyAny explicitly with .as_ref() let _: &PyAny = cell.as_ref(); Ok(()) }).unwrap(); }
PyRef<SomeType>
and PyRefMut<SomeType>
Represents: reference wrapper types employed by PyCell
to keep track of
borrows, analog to Ref
and RefMut
used by RefCell
.
Used: while borrowing a PyCell
. They can also be used with .extract()
on types like Py<T>
and PyAny
to get a reference quickly.
Related traits and types
PyClass
This trait marks structs defined in Rust that are also usable as Python classes,
usually defined using the #[pyclass]
macro.
PyNativeType
This trait marks structs that mirror native Python types, such as PyList
.
Parallelism
CPython has the infamous Global Interpreter Lock, which prevents several threads from executing Python bytecode in parallel. This makes threading in Python a bad fit for CPU-bound tasks and often forces developers to accept the overhead of multiprocessing.
In PyO3 parallelism can be easily achieved in Rust-only code. Let's take a look at our word-count example, where we have a search
function that utilizes the rayon crate to count words in parallel.
#![allow(unused)] fn main() { #![allow(dead_code)] use pyo3::prelude::*; // These traits let us use `par_lines` and `map`. use rayon::str::ParallelString; use rayon::iter::ParallelIterator; /// Count the occurrences of needle in line, case insensitive fn count_line(line: &str, needle: &str) -> usize { let mut total = 0; for word in line.split(' ') { if word == needle { total += 1; } } total } #[pyfunction] fn search(contents: &str, needle: &str) -> usize { contents .par_lines() .map(|line| count_line(line, needle)) .sum() } }
But let's assume you have a long running Rust function which you would like to execute several times in parallel. For the sake of example let's take a sequential version of the word count:
#![allow(unused)] fn main() { #![allow(dead_code)] fn count_line(line: &str, needle: &str) -> usize { let mut total = 0; for word in line.split(' ') { if word == needle { total += 1; } } total } fn search_sequential(contents: &str, needle: &str) -> usize { contents.lines().map(|line| count_line(line, needle)).sum() } }
To enable parallel execution of this function, the Python::allow_threads
method can be used to temporarily release the GIL, thus allowing other Python threads to run. We then have a function exposed to the Python runtime which calls search_sequential
inside a closure passed to Python::allow_threads
to enable true parallelism:
#![allow(unused)] fn main() { #![allow(dead_code)] use pyo3::prelude::*; fn count_line(line: &str, needle: &str) -> usize { let mut total = 0; for word in line.split(' ') { if word == needle { total += 1; } } total } fn search_sequential(contents: &str, needle: &str) -> usize { contents.lines().map(|line| count_line(line, needle)).sum() } #[pyfunction] fn search_sequential_allow_threads(py: Python<'_>, contents: &str, needle: &str) -> usize { py.allow_threads(|| search_sequential(contents, needle)) } }
Now Python threads can use more than one CPU core, resolving the limitation which usually makes multi-threading in Python only good for IO-bound tasks:
from concurrent.futures import ThreadPoolExecutor
from word_count import search_sequential_allow_threads
executor = ThreadPoolExecutor(max_workers=2)
future_1 = executor.submit(
    search_sequential_allow_threads, contents, needle
)
future_2 = executor.submit(
    search_sequential_allow_threads, contents, needle
)
result_1 = future_1.result()
result_2 = future_2.result()
Benchmark
Let's benchmark the word-count
example to verify that we really did unlock parallelism with PyO3.
We are using pytest-benchmark
to benchmark four word count functions:
- Pure Python version
- Rust parallel version
- Rust sequential version
- Rust sequential version executed twice with two Python threads
The benchmark script can be found here, and we can run nox
in the word-count
folder to benchmark these functions.
While the results of the benchmark of course depend on your machine, the relative results should be similar to this (mid 2020):
-------------------------------------------------------------------------------------------------- benchmark: 4 tests -------------------------------------------------------------------------------------------------
Name (time in ms) Min Max Mean StdDev Median IQR Outliers OPS Rounds Iterations
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_word_count_rust_parallel 1.7315 (1.0) 4.6495 (1.0) 1.9972 (1.0) 0.4299 (1.0) 1.8142 (1.0) 0.2049 (1.0) 40;46 500.6943 (1.0) 375 1
test_word_count_rust_sequential 7.3348 (4.24) 10.3556 (2.23) 8.0035 (4.01) 0.7785 (1.81) 7.5597 (4.17) 0.8641 (4.22) 26;5 124.9457 (0.25) 121 1
test_word_count_rust_sequential_twice_with_threads 7.9839 (4.61) 10.3065 (2.22) 8.4511 (4.23) 0.4709 (1.10) 8.2457 (4.55) 0.3927 (1.92) 17;17 118.3274 (0.24) 114 1
test_word_count_python_sequential 27.3985 (15.82) 45.4527 (9.78) 28.9604 (14.50) 4.1449 (9.64) 27.5781 (15.20) 0.4638 (2.26) 3;5 34.5299 (0.07) 35 1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
You can see that the Python threaded version is not much slower than the Rust sequential version, which means compared to an execution on a single CPU core the speed has doubled.
Debugging
Macros
PyO3's attributes (#[pyclass]
, #[pymodule]
, etc.) are procedural macros, which means that they rewrite the source of the annotated item. You can view the generated source with the following command, which also expands a few other things:
cargo rustc --profile=check -- -Z unstable-options --pretty=expanded > expanded.rs; rustfmt expanded.rs
(You might need to install rustfmt if you don't already have it.)
You can also debug classic !-macros by adding -Z trace-macros:
cargo rustc --profile=check -- -Z unstable-options --pretty=expanded -Z trace-macros > expanded.rs; rustfmt expanded.rs
See cargo expand for a more elaborate version of those commands.
Running with Valgrind
Valgrind is a tool to detect memory management bugs such as memory leaks.
You first need to install a debug build of Python, otherwise Valgrind won't produce usable results. In Ubuntu there's e.g. a python3-dbg
package.
Activate an environment with the debug interpreter and recompile. If you're on Linux, use ldd
with the name of your binary and check that you're linking e.g. libpython3.7d.so.1.0
instead of libpython3.7.so.1.0
.
Download the suppressions file for cpython.
Run Valgrind with valgrind --suppressions=valgrind-python.supp ./my-command --with-options
Getting a stacktrace
The best start to investigate a crash such as a segmentation fault is a backtrace. You can set RUST_BACKTRACE=1
as an environment variable to get the stack trace on a panic!
. Alternatively you can use a debugger such as gdb
to explore the issue. Rust provides a wrapper, rust-gdb
, which has pretty-printers for inspecting Rust variables. Since PyO3 uses cdylib
for Python shared objects, it does not receive the pretty-print debug hooks in rust-gdb
(rust-lang/rust#96365). The mentioned issue contains a workaround for enabling pretty-printers in this case.
- Link against a debug build of python as described in the previous chapter
- Run rust-gdb <my-binary>
- Set a breakpoint (b) on rust_panic if you are investigating a panic!
- Enter r to run
- After the crash occurred, enter bt or bt full to print the stacktrace
Often it is helpful to run a small piece of Python code to exercise a section of Rust.
rust-gdb --args python -c "import my_package; my_package.sum_to_string(1, 2)"
Features reference
PyO3 provides a number of Cargo features to customise functionality. This chapter of the guide provides detail on each of them.
By default, only the macros
feature is enabled.
Features for extension module authors
extension-module
This feature is required when building a Python extension module using PyO3.
It tells PyO3's build script to skip linking against libpython.so
on Unix platforms, where this must not be done.
See the building and distribution section for further detail.
abi3
This feature is used when building Python extension modules to create wheels which are compatible with multiple Python versions.
It restricts PyO3's API to a subset of the full Python API which is guaranteed by PEP 384 to be forwards-compatible with future Python versions.
See the building and distribution section for further detail.
The abi3-pyXY features (abi3-py37, abi3-py38, abi3-py39, and abi3-py310)
These features are extensions of the abi3
feature to specify the exact minimum Python version which the multiple-version-wheel will support.
See the building and distribution section for further detail.
generate-import-lib
This experimental feature is used to generate import libraries for Python DLL for MinGW-w64 and MSVC (cross-)compile targets.
Enabling it allows you to (cross-)compile extension modules for any Windows target without having to install the Windows Python distribution files for the target.
See the building and distribution section for further detail.
Features for embedding Python in Rust
auto-initialize
This feature changes Python::with_gil
and Python::acquire_gil
to automatically initialize a Python interpreter (by calling prepare_freethreaded_python
) if needed.
If you do not enable this feature, you should call pyo3::prepare_freethreaded_python()
before attempting to call any other Python APIs.
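A minimal sketch of embedding Python without the auto-initialize feature:

use pyo3::prelude::*;

fn main() {
    // Without auto-initialize, the interpreter must be initialized
    // manually before any other Python API is used.
    pyo3::prepare_freethreaded_python();

    Python::with_gil(|py| {
        println!("Python {}", py.version());
    });
}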
Advanced Features
macros
This feature enables a dependency on the pyo3-macros
crate, which provides the procedural macros portion of PyO3's API:
#[pymodule]
#[pyfunction]
#[pyclass]
#[pymethods]
#[derive(FromPyObject)]
It also provides the py_run!
macro.
These macros require a number of dependencies which may not be needed by users who just need PyO3 for Python FFI. Disabling this feature enables faster builds for those users, as these dependencies will not be built if this feature is disabled.
This feature is enabled by default. To disable it, set
default-features = false
for the pyo3
entry in your Cargo.toml.
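For example, a Cargo.toml entry along these lines (the version number is shown purely for illustration) disables the default macros feature:

[dependencies]
pyo3 = { version = "0.17.3", default-features = false }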
multiple-pymethods
This feature enables a dependency on inventory
, which enables each #[pyclass]
to have more than one #[pymethods]
block. This feature also requires a minimum Rust version of 1.62 due to limitations in the inventory
crate.
Most users should only need a single #[pymethods]
per #[pyclass]
. In addition, not all platforms (e.g. Wasm) are supported by inventory
. For this reason this feature is not enabled by default, meaning fewer dependencies and faster compilation for the majority of users.
See the #[pyclass]
implementation details for more information.
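With the feature enabled, a split like the following sketch compiles (the class and method names are made up):

use pyo3::prelude::*;

#[pyclass]
struct Circle {
    radius: f64,
}

#[pymethods]
impl Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.radius * self.radius
    }
}

// A second #[pymethods] block for the same class; this only compiles
// when the multiple-pymethods feature is enabled.
#[pymethods]
impl Circle {
    fn circumference(&self) -> f64 {
        2.0 * std::f64::consts::PI * self.radius
    }
}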
pyproto
This feature enables the #[pyproto]
macro, which is a deprecated alternative to #[pymethods]
for defining magic methods such as __eq__
.
nightly
The nightly
feature needs the nightly Rust compiler. This allows PyO3 to use the auto_traits and negative_impls features to fix the Python::allow_threads
function.
resolve-config
The resolve-config
feature of the pyo3-build-config
crate controls whether that crate's
build script automatically resolves a Python interpreter / build configuration. This feature is primarily useful when building PyO3
itself. By default this feature is not enabled, meaning you can freely use pyo3-build-config
as a standalone library to read or write PyO3 build configuration files or resolve metadata about a Python interpreter.
Optional Dependencies
These features enable conversions between Python types and types from other Rust crates, enabling easy access to the rest of the Rust ecosystem.
anyhow
Adds a dependency on anyhow. Enables a conversion from anyhow’s Error
type to PyErr
, for easy error handling.
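As a sketch (the function is hypothetical and assumes anyhow is also a direct dependency of your crate), a #[pyfunction] can then return anyhow::Result directly:

use pyo3::prelude::*;

// With the anyhow feature enabled, anyhow::Error converts into PyErr,
// so the ? operator works on any error anyhow can wrap.
#[pyfunction]
fn read_file(path: &str) -> anyhow::Result<String> {
    let contents = std::fs::read_to_string(path)?;
    Ok(contents)
}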
chrono
Adds a dependency on chrono. Enables a conversion from chrono's types to python:
- Duration -> PyDelta
- FixedOffset -> PyDelta
- Utc -> PyTzInfo
- NaiveDate -> PyDate
- NaiveTime -> PyTime
- DateTime -> PyDateTime
eyre
Adds a dependency on eyre. Enables a conversion from eyre’s Report
type to PyErr
, for easy error handling.
hashbrown
Adds a dependency on hashbrown and enables conversions into its HashMap
and HashSet
types.
indexmap
Adds a dependency on indexmap and enables conversions into its IndexMap
type.
num-bigint
Adds a dependency on num-bigint and enables conversions into its BigInt
and BigUint
types.
num-complex
Adds a dependency on num-complex and enables conversions into its Complex
type.
serde
Enables (de)serialization of Py<T> objects via serde, allowing the use of #[derive(Serialize, Deserialize)] on structs that hold references to #[pyclass] instances:
#![allow(unused)] fn main() { #[cfg(feature = "serde")] #[allow(dead_code)] mod serde_only { use pyo3::prelude::*; use serde::{Deserialize, Serialize}; #[pyclass] #[derive(Serialize, Deserialize)] struct Permission { name: String } #[pyclass] #[derive(Serialize, Deserialize)] struct User { username: String, permissions: Vec<Py<Permission>> } } }
Memory management
Rust and Python have very different notions of memory management. Rust has a strict memory model with concepts of ownership, borrowing, and lifetimes, where memory is freed at predictable points in program execution. Python has a looser memory model in which variables are reference-counted with shared, mutable state by default. A global interpreter lock (GIL) is needed to prevent race conditions, and a garbage collector is needed to break reference cycles. Memory in Python is freed eventually by the garbage collector, but not usually in a predictable way.
PyO3 bridges the Rust and Python memory models with two different strategies for
accessing memory allocated on Python's heap from inside Rust. These are
GIL-bound, or "owned" references, and GIL-independent Py<PyAny>
smart pointers.
GIL-bound memory
PyO3's GIL-bound, "owned references" (&PyAny
etc.) make PyO3 more ergonomic to
use by ensuring that their lifetime can never be longer than the duration the
Python GIL is held. This means that most of PyO3's API can assume the GIL is
held. (If PyO3 could not assume this, every PyO3 API would need to take a
Python
GIL token to prove that the GIL is held.) This allows us to write
very simple and easy-to-understand programs like this:
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let hello: &PyString = py.eval("\"Hello World!\"", None, None)?.extract()?; println!("Python says: {}", hello); Ok(()) })?; Ok(()) }
Internally, calling Python::with_gil()
or Python::acquire_gil()
creates a
GILPool
which owns the memory pointed to by the reference. In the example
above, the lifetime of the reference hello
is bound to the GILPool
. When
the with_gil()
closure ends or the GILGuard
from acquire_gil()
is dropped,
the GILPool
is also dropped and the Python reference counts of the variables
it owns are decreased, releasing them to the Python garbage collector. Most
of the time we don't have to think about this, but consider the following:
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { for _ in 0..10 { let hello: &PyString = py.eval("\"Hello World!\"", None, None)?.extract()?; println!("Python says: {}", hello); } // There are 10 copies of `hello` on Python's heap here. Ok(()) })?; Ok(()) }
We might assume that the hello
variable's memory is freed at the end of each
loop iteration, but in fact we create 10 copies of hello
on Python's heap.
This may seem surprising at first, but it is completely consistent with Rust's
memory model. The hello
variable is dropped at the end of each loop, but it
is only a reference to the memory owned by the GILPool
, and its lifetime is
bound to the GILPool
, not the for loop. The GILPool
isn't dropped until
the end of the with_gil()
closure, at which point the 10 copies of hello
are finally released to the Python garbage collector.
In general we don't want unbounded memory growth during loops! One workaround is to acquire and release the GIL with each iteration of the loop.
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { for _ in 0..10 { Python::with_gil(|py| -> PyResult<()> { let hello: &PyString = py.eval("\"Hello World!\"", None, None)?.extract()?; println!("Python says: {}", hello); Ok(()) })?; // only one copy of `hello` at a time } Ok(()) }
It might not be practical or performant to acquire and release the GIL so many
times. Another workaround is to work with the GILPool
object directly, but
this is unsafe.
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { for _ in 0..10 { let pool = unsafe { py.new_pool() }; let py = pool.python(); let hello: &PyString = py.eval("\"Hello World!\"", None, None)?.extract()?; println!("Python says: {}", hello); } Ok(()) })?; Ok(()) }
The unsafe method Python::new_pool
allows you to create a nested GILPool
from which you can retrieve a new py: Python
GIL token. Variables created
with this new GIL token are bound to the nested GILPool
and will be released
when the nested GILPool
is dropped. Here, the nested GILPool
is dropped
at the end of each loop iteration, before the with_gil()
closure ends.
When doing this, you must be very careful to ensure that once the GILPool
is
dropped you do not retain access to any owned references created after the
GILPool
was created. Read the
documentation for Python::new_pool()
for more information on safety.
GIL-independent memory
Sometimes we need a reference to memory on Python's heap that can outlive the
GIL. PyO3's Py<PyAny>
is analogous to Rc<T>
, but for variables whose
memory is allocated on Python's heap. Cloning a Py<PyAny>
increases its
internal reference count just like cloning Rc<T>
. The smart pointer can
outlive the GIL from which it was created. It isn't magic, though. We need to
reacquire the GIL to access the memory pointed to by the Py<PyAny>
.
What happens to the memory when the last Py<PyAny>
is dropped and its
reference count reaches zero? It depends whether or not we are holding the GIL.
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { Python::with_gil(|py| -> PyResult<()> { let hello: Py<PyString> = py.eval("\"Hello World!\"", None, None)?.extract()?; println!("Python says: {}", hello.as_ref(py)); Ok(()) })?; Ok(()) }
At the end of the Python::with_gil()
closure hello
is dropped, and then the
GIL is dropped. Since hello
is dropped while the GIL is still held by the
current thread, its memory is released to the Python garbage collector
immediately.
This example wasn't very interesting. We could have just used a GIL-bound
&PyString
reference. What happens when the last Py<Any>
is dropped while
we are not holding the GIL?
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { let hello: Py<PyString> = Python::with_gil(|py| { py.eval("\"Hello World!\"", None, None)?.extract() })?; // Do some stuff... // Now sometime later in the program we want to access `hello`. Python::with_gil(|py| { println!("Python says: {}", hello.as_ref(py)); }); // Now we're done with `hello`. drop(hello); // Memory *not* released here. // Sometime later we need the GIL again for something... Python::with_gil(|py| // Memory for `hello` is released here. () ); Ok(()) }
When hello
is dropped nothing happens to the pointed-to memory on Python's
heap because nothing can happen if we're not holding the GIL. Fortunately,
the memory isn't leaked. PyO3 keeps track of the memory internally and will
release it the next time we acquire the GIL.
We can avoid the delay in releasing memory if we are careful to drop the
Py<PyAny>
while the GIL is held.
use pyo3::prelude::*; use pyo3::types::PyString; fn main() -> PyResult<()> { let hello: Py<PyString> = Python::with_gil(|py| { py.eval("\"Hello World!\"", None, None)?.extract() })?; // Do some stuff... // Now sometime later in the program: Python::with_gil(|py| { println!("Python says: {}", hello.as_ref(py)); drop(hello); // Memory released here. }); Ok(()) }
We could also have used Py::into_ref()
, which consumes self
, instead of
Py::as_ref()
. But note that in addition to being slower than as_ref()
,
into_ref()
binds the memory to the lifetime of the GILPool
, which means
that rather than being released immediately, the memory will not be released
until the GIL is dropped.
use pyo3::prelude::*;
use pyo3::types::PyString;

fn main() -> PyResult<()> {
    let hello: Py<PyString> = Python::with_gil(|py| {
        py.eval("\"Hello World!\"", None, None)?.extract()
    })?;

    // Do some stuff...
    // Now sometime later in the program:
    Python::with_gil(|py| {
        println!("Python says: {}", hello.into_ref(py));
        // Memory not released yet.
        // Do more stuff...
        // Memory released here at end of `with_gil()` closure.
    });

    Ok(())
}
Advanced topics
FFI
PyO3 exposes much of Python's C API through the ffi
module.
The C API is naturally unsafe and requires you to manage reference counts, errors and specific invariants yourself. Please refer to the C API Reference Manual and The Rustonomicon before using any function from that API.
Memory management
PyO3's &PyAny
"owned references" and Py<PyAny>
smart pointers are used to
access memory stored in Python's heap. This memory sometimes lives for longer
than expected because of differences in Rust and Python's memory models. See
the chapter on memory management for more information.
Building and distribution
This chapter of the guide goes into detail on how to build and distribute projects using PyO3. The way to achieve this is very different depending on whether the project is a Python module implemented in Rust, or a Rust binary embedding Python. For both types of project there are also common problems such as the Python version to build for and the linker arguments to use.
The material in this chapter is intended for users who have already read the PyO3 README. It covers in turn the choices that can be made for Python modules and for Rust binaries. There is also a section at the end about cross-compiling projects using PyO3.
There is an additional sub-chapter dedicated to supporting multiple Python versions.
Configuring the Python version
PyO3 uses a build script (backed by the pyo3-build-config
crate) to determine the Python version and set the correct linker arguments. By default it will attempt to use the following in order:
- Any active Python virtualenv.
- The
python
executable (if it's a Python 3 interpreter). - The
python3
executable.
You can override the Python interpreter by setting the PYO3_PYTHON
environment variable, e.g. PYO3_PYTHON=python3.7
, PYO3_PYTHON=/usr/bin/python3.9
, or even a PyPy interpreter PYO3_PYTHON=pypy3
.
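For example, to build against a specific interpreter you might run something like the following (the interpreter paths are only illustrative):
$ PYO3_PYTHON=/usr/bin/python3.9 cargo build
# or export it for the rest of the shell session
$ export PYO3_PYTHON=pypy3
$ cargo build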
Once the Python interpreter is located, pyo3-build-config
executes it to query the information in the sysconfig
module which is needed to configure the rest of the compilation.
To validate the configuration which PyO3 will use, you can run a compilation with the environment variable PYO3_PRINT_CONFIG=1
set. An example output of doing this is shown below:
$ PYO3_PRINT_CONFIG=1 cargo build
Compiling pyo3 v0.14.1 (/home/david/dev/pyo3)
error: failed to run custom build command for `pyo3 v0.14.1 (/home/david/dev/pyo3)`
Caused by:
process didn't exit successfully: `/home/david/dev/pyo3/target/debug/build/pyo3-7a8cf4fe22e959b7/build-script-build` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=PYO3_CROSS
cargo:rerun-if-env-changed=PYO3_CROSS_LIB_DIR
cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_VERSION
cargo:rerun-if-env-changed=PYO3_PRINT_CONFIG
-- PYO3_PRINT_CONFIG=1 is set, printing configuration and halting compile --
implementation=CPython
version=3.8
shared=true
abi3=false
lib_name=python3.8
lib_dir=/usr/lib
executable=/usr/bin/python
pointer_width=64
build_flags=
suppress_build_script_link_lines=false
Advanced: config files
If you save the above output config from PYO3_PRINT_CONFIG
to a file, it is possible to manually override the contents and feed it back into PyO3 using the PYO3_CONFIG_FILE
env var.
If your build environment is unusual enough that PyO3's regular configuration detection doesn't work, using a config file like this will give you the flexibility to make PyO3 work for you. To see the full set of options supported, see the documentation for the InterpreterConfig
struct.
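A rough sketch of that workflow might look like the following (the file name and location are illustrative; the file contents are the key=value lines shown above):
$ PYO3_PRINT_CONFIG=1 cargo build
# copy the printed key=value lines into pyo3-config.txt, edit them as needed, then:
$ PYO3_CONFIG_FILE=/absolute/path/to/pyo3-config.txt cargo build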
Building Python extension modules
Python extension modules need to be compiled differently depending on the OS (and architecture) that they are being compiled for. As well as multiple OSes (and architectures), there are also many different Python versions which are actively supported. Packages uploaded to PyPI usually want to upload prebuilt "wheels" covering many OS/arch/version combinations so that users on all these different platforms don't have to compile the package themselves. Package vendors can opt-in to the "abi3" limited Python API which allows their wheels to be used on multiple Python versions, reducing the number of wheels they need to compile, but restricts the functionality they can use.
There are many ways to go about this: it is possible to use cargo
to build the extension module (along with some manual work, which varies with OS). The PyO3 ecosystem has two packaging tools, maturin
and setuptools-rust
, which abstract over the OS difference and also support building wheels for PyPI upload.
PyO3 has some Cargo features to configure projects for building Python extension modules:
- The
extension-module
feature, which must be enabled when building Python extension modules. - The
abi3
feature and its version-specificabi3-pyXY
companions, which are used to opt-in to the limited Python API in order to support multiple Python versions in a single wheel.
This section describes each of these packaging tools before describing how to build manually without them. It then proceeds with an explanation of the extension-module
feature. Finally, there is a section describing PyO3's abi3
features.
Packaging tools
The PyO3 ecosystem has two main choices to abstract the process of developing Python extension modules:
maturin
is a command-line tool to build, package and upload Python modules. It makes opinionated choices about project layout meaning it needs very little configuration. This makes it a great choice for users who are building a Python extension from scratch and don't need flexibility.setuptools-rust
is an add-on forsetuptools
which adds extra keyword arguments to thesetup.py
configuration file. It requires more configuration thanmaturin
, however this gives additional flexibility for users adding Rust to an existing Python package that can't satisfymaturin
's constraints.
Consult each project's documentation for full details on how to get started using them and how to upload wheels to PyPI.
There are also maturin-starter
and setuptools-rust-starter
examples in the PyO3 repository.
Manual builds
To build a PyO3-based Python extension manually, start by running cargo build
as normal in a library project which uses PyO3's extension-module
feature and has the cdylib
crate type.
Once built, symlink (or copy) and rename the shared library from Cargo's target/
directory to your desired output directory:
- on macOS, rename
libyour_module.dylib
toyour_module.so
. - on Windows, rename
your_module.dll
toyour_module.pyd
. - on Linux, rename
libyour_module.so
toyour_module.so
.
You can then open a Python shell in the output directory and you'll be able to run import your_module
.
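For example, a minimal manual build on Linux might look like the following sketch (crate and module names are illustrative):
$ cargo build --release
$ cp target/release/libyour_module.so your_module.so
$ python3
>>> import your_module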
If you're packaging your library for redistribution, you should indicate the Python interpreter your library is compiled for by including the platform tag in its name. This prevents incompatible interpreters from trying to import your library. If you're compiling for PyPy you must include the platform tag, or PyPy will ignore the module.
See, as an example, Bazel rules to build PyO3 on Linux at https://github.com/TheButlah/rules_pyo3.
Platform tags
Rather than using just the .so
or .pyd
extension suggested above (depending on OS), you can prefix the shared library extension with a platform tag to indicate the interpreter it is compatible with. You can query your interpreter's platform tag from the sysconfig
module. Some example outputs of this are seen below:
# CPython 3.10 on macOS
.cpython-310-darwin.so
# PyPy 7.3 (Python 3.8) on Linux
$ python -c 'import sysconfig; print(sysconfig.get_config_var("EXT_SUFFIX"))'
.pypy38-pp73-x86_64-linux-gnu.so
So, for example, a valid module library name on CPython 3.10 for macOS is your_module.cpython-310-darwin.so
, and its equivalent when compiled for PyPy 7.3 on Linux would be your_module.pypy38-pp73-x86_64-linux-gnu.so
.
See PEP 3149 for more background on platform tags.
macOS
On macOS, because the extension-module
feature disables linking to libpython
(see the next section), some additional linker arguments need to be set. maturin
and setuptools-rust
both pass these arguments for PyO3 automatically, but projects using manual builds will need to set these directly in order to support macOS.
The easiest way to set the correct linker arguments is to add a build.rs
with the following content:
fn main() {
pyo3_build_config::add_extension_module_link_args();
}
Remember to also add pyo3-build-config
to the build-dependencies
section in Cargo.toml
.
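That section might look like this (adjust the version to match the pyo3 version in use):
[build-dependencies]
pyo3-build-config = "0.17.3"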
An alternative to using pyo3-build-config
is to add the following to a cargo configuration file (e.g. .cargo/config.toml
):
[target.x86_64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
[target.aarch64-apple-darwin]
rustflags = [
"-C", "link-arg=-undefined",
"-C", "link-arg=dynamic_lookup",
]
Using the MacOS system python3 (/usr/bin/python3
, as opposed to python installed via homebrew, pyenv, nix, etc.) may result in runtime errors such as Library not loaded: @rpath/Python3.framework/Versions/3.8/Python3
. These can be resolved with another addition to .cargo/config.toml
:
[build]
rustflags = [
"-C", "link-args=-Wl,-rpath,/Library/Developer/CommandLineTools/Library/Frameworks",
]
Alternatively, on rust >= 1.56, one can include in build.rs
:
fn main() { println!( "cargo:rustc-link-arg=-Wl,-rpath,/Library/Developer/CommandLineTools/Library/Frameworks" ); }
For more discussion on and workarounds for MacOS linking problems see this issue.
Finally, don't forget that on MacOS the extension-module
feature will cause cargo test
to fail without the --no-default-features
flag (see the FAQ).
The extension-module
feature
PyO3's extension-module
feature is used to disable linking to libpython
on unix targets.
This is necessary because by default PyO3 links to libpython
. This makes binaries, tests, and examples "just work". However, Python extensions on unix must not link to libpython for manylinux compliance.
The downside of not linking to libpython
is that binaries, tests, and examples (which usually embed Python) will fail to build. If you have an extension module as well as other outputs in a single project, you need to use optional Cargo features to disable the extension-module
when you're not building the extension module. See the FAQ for an example workaround.
Py_LIMITED_API
/abi3
By default, Python extension modules can only be used with the same Python version they were compiled against. For example, an extension module built for Python 3.5 can't be imported in Python 3.8. PEP 384 introduced the idea of the limited Python API, which would have a stable ABI enabling extension modules built with it to be used against multiple Python versions. This is also known as abi3
.
The advantage of building extension modules using the limited Python API is that package vendors only need to build and distribute a single copy (for each OS / architecture), and users can install it on all Python versions from the minimum version and up. The downside of this is that PyO3 can't use optimizations which rely on being compiled against a known exact Python version. It's up to you to decide whether this matters for your extension module. It's also possible to design your extension module such that you can distribute abi3
wheels but allow users compiling from source to benefit from additional optimizations - see the support for multiple python versions section of this guide, in particular the #[cfg(Py_LIMITED_API)]
flag.
There are three steps involved in making use of abi3
when building Python packages as wheels:
- Enable the
abi3
feature inpyo3
. This ensurespyo3
only calls Python C-API functions which are part of the stable API, and on Windows also ensures that the project links against the correct shared object (no special behavior is required on other platforms):
[dependencies]
pyo3 = { version = "0.17.3", features = ["abi3"] }
-
Ensure that the built shared objects are correctly marked as
abi3
. This is accomplished by telling your build system that you're using the limited API.maturin
>= 0.9.0 andsetuptools-rust
>= 0.11.4 supportabi3
wheels. See the corresponding PRs for more. -
Ensure that the
.whl
is correctly marked asabi3
. For projects usingsetuptools
, this is accomplished by passing--py-limited-api=cp3x
(wherex
is the minimum Python version supported by the wheel, e.g.--py-limited-api=cp35
for Python 3.5) tosetup.py bdist_wheel
.
Minimum Python version for abi3
Because a single abi3
wheel can be used with many different Python versions, PyO3 has feature flags abi3-py37
, abi3-py38
, abi3-py39
etc. to set the minimum required Python version for your abi3
wheel.
For example, if you set the abi3-py37
feature, your extension wheel can be used on all Python 3 versions from Python 3.7 and up. maturin
and setuptools-rust
will give the wheel a name like my-extension-1.0-cp37-abi3-manylinux2020_x86_64.whl
.
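For example, a Cargo.toml for an extension module supporting Python 3.7 and up through the limited API might contain:
[dependencies]
pyo3 = { version = "0.17.3", features = ["extension-module", "abi3-py37"] }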
As your extension module may be run with multiple different Python versions you may occasionally find you need to check the Python version at runtime to customize behavior. See the relevant section of this guide on supporting multiple Python versions at runtime.
PyO3 is only able to link your extension module to abi3 versions up to and including your host Python version. E.g., if you set abi3-py38
and try to compile the crate with a host of Python 3.7, the build will fail.
Note: If you set more than one of these abi3 version feature flags the lowest version always wins. For example, with both
abi3-py37
andabi3-py38
set, PyO3 would build a wheel which supports Python 3.7 and up.
Building abi3
extensions without a Python interpreter
As an advanced feature, you can build an abi3 PyO3 wheel without calling the Python interpreter, with the environment variable PYO3_NO_PYTHON
set.
Also, if the build host Python interpreter is not found or is too old or otherwise unusable,
PyO3 will still attempt to compile abi3
extension modules after displaying a warning message.
On Unix-like systems this works unconditionally; on Windows you must also set the RUSTFLAGS
environment variable
to contain -L native=/path/to/python/libs
so that the linker can find python3.lib
.
If the python3.dll
import library is not available, an experimental generate-import-lib
crate
feature may be enabled, and the required library will be created and used by PyO3 automatically.
Note: MSVC targets require LLVM binutils (llvm-dlltool
) to be available in PATH
for
the automatic import library generation feature to work.
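For example, on a Unix-like system such a build might be invoked as follows (assuming an abi3 feature is already enabled in Cargo.toml):
$ PYO3_NO_PYTHON=1 cargo build --release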
Missing features
Due to limitations in the Python API, there are a few pyo3
features that do
not work when compiling for abi3
. These are:
#[pyo3(text_signature = "...")]
does not work on classes until Python 3.10 or greater.- The
dict
andweakref
options on classes are not supported until Python 3.9 or greater. - The buffer API is not supported until Python 3.11 or greater.
- Optimizations which rely on knowledge of the exact Python version compiled against.
Embedding Python in Rust
If you want to embed the Python interpreter inside a Rust program, there are two modes in which this can be done: dynamically and statically. We'll cover each of these modes in the following sections. Each of them affects how you must distribute your program. Instead of learning how to do this yourself, you might want to consider using a project like PyOxidizer to ship your application and all of its dependencies in a single file.
PyO3 automatically switches between the two linking modes depending on whether the Python distribution you have configured PyO3 to use (see above) contains a shared library or a static library. The static library is most often seen in Python distributions compiled from source without the --enable-shared
configuration option. For example, this is the default for pyenv
on macOS.
Dynamically embedding the Python interpreter
Embedding the Python interpreter dynamically is much easier than doing so statically. This is done by linking your program against a Python shared library (such as libpython3.9.so
on UNIX, or python39.dll
on Windows). The implementation of the Python interpreter resides inside the shared library. This means that when the OS runs your Rust program it also needs to be able to find the Python shared library.
This mode of embedding works well for Rust tests which need access to the Python interpreter. It is also great for Rust software which is installed inside a Python virtualenv, because the virtualenv sets up appropriate environment variables to locate the correct Python shared library.
For distributing your program to non-technical users, you will have to consider including the Python shared library in your distribution as well as setting up wrapper scripts to set the right environment variables (such as LD_LIBRARY_PATH
on UNIX, or PATH
on Windows).
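A minimal wrapper script for a UNIX system might look something like the following sketch (the layout and binary name are illustrative):
#!/bin/sh
# Point the dynamic linker at the bundled Python shared library, then run the real binary.
HERE="$(dirname "$0")"
LD_LIBRARY_PATH="$HERE/lib" exec "$HERE/my_program" "$@"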
Note that PyPy cannot be embedded in Rust (or any other software). Support for this is tracked on the PyPy issue tracker.
Statically embedding the Python interpreter
Embedding the Python interpreter statically means including the contents of a Python static library directly inside your Rust binary. This means that to distribute your program you only need to ship your binary file: it contains the Python interpreter inside the binary!
On Windows static linking is almost never done, so Python distributions don't usually include a static library. The information below applies only to UNIX.
The Python static library is usually called libpython.a
.
Static linking has a lot of complications, listed below. For these reasons PyO3 does not yet have first-class support for this embedding mode. See issue 416 on PyO3's Github for more information and to discuss any issues you encounter.
The auto-initialize
feature is deliberately disabled when embedding the interpreter statically because this is often unintentionally done by new users to PyO3 running test programs. Trying out PyO3 is much easier using dynamic embedding.
The known complications are:
-
To import compiled extension modules (such as other Rust extension modules, or those written in C), your binary must have the correct linker flags set during compilation to export the original contents of
libpython.a
so that extensions can use them (e.g.-Wl,--export-dynamic
). -
The C compiler and flags which were used to create
libpython.a
must be compatible with your Rust compiler and flags, else you will experience compilation failures. Significantly different compiler versions may see errors like this:
lto1: fatal error: bytecode stream in file 'rust-numpy/target/release/deps/libpyo3-6a7fb2ed970dbf26.rlib' generated with LTO version 6.0 instead of the expected 6.2
Mismatching flags may lead to errors like this:
/usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libpython3.9.a(zlibmodule.o): relocation R_X86_64_32 against `.data' can not be used when making a PIE object; recompile with -fPIE
If you encounter these or other complications when linking the interpreter statically, discuss them on issue 416 on PyO3's Github. It is hoped that eventually that discussion will contain enough information and solutions that PyO3 can offer first-class support for static embedding.
Import your module when embedding the Python interpreter
When you run your Rust binary with an embedded interpreter, any #[pymodule]
created modules won't be accessible to import unless added to a table called PyImport_Inittab
before the embedded interpreter is initialized. This will cause Python statements in your embedded interpreter such as import your_new_module
to fail. You can call the macro append_to_inittab
with your module before initializing the Python interpreter to add the module function into that table. (The Python interpreter will be initialized by calling prepare_freethreaded_python
, with_embedded_python_interpreter
, or Python::with_gil
with the auto-initialize
feature enabled.)
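As a minimal sketch of this pattern (the module name your_new_module mirrors the example above; the py.run call just demonstrates that the import now succeeds):
use pyo3::prelude::*;

#[pymodule]
fn your_new_module(_py: Python<'_>, _m: &PyModule) -> PyResult<()> {
    Ok(())
}

fn main() -> PyResult<()> {
    // The module must be appended to the inittab *before* the interpreter is initialized.
    pyo3::append_to_inittab!(your_new_module);
    pyo3::prepare_freethreaded_python();
    Python::with_gil(|py| py.run("import your_new_module", None, None))
}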
Cross Compiling
Thanks to Rust's great cross-compilation support, cross-compiling using PyO3 is relatively straightforward. To get started, you'll need a few pieces of software:
- A toolchain for your target.
- The appropriate options in your Cargo
.config
for the platform you're targeting and the toolchain you are using. - A Python interpreter that's already been compiled for your target (optional when building "abi3" extension modules).
- A Python interpreter that is built for your host and available through the
PATH
or setting thePYO3_PYTHON
variable (optional when building "abi3" extension modules).
After you've obtained the above, you can build a cross-compiled PyO3 module by using Cargo's --target
flag. PyO3's build script will detect that you are attempting a cross-compile based on your host machine and the desired target.
When cross-compiling, PyO3's build script cannot execute the target Python interpreter to query the configuration, so there are a few additional environment variables you may need to set:
PYO3_CROSS
: If present this variable forces PyO3 to configure as a cross-compilation.PYO3_CROSS_LIB_DIR
: This variable can be set to the directory containing the target's libpython DSO and the associated_sysconfigdata*.py
file for Unix-like targets, or the Python DLL import libraries for the Windows target. This variable is only needed when the output binary must link to libpython explicitly (e.g. when targeting Windows and Android or embedding a Python interpreter), or when it is absolutely required to get the interpreter configuration from_sysconfigdata*.py
.PYO3_CROSS_PYTHON_VERSION
: Major and minor version (e.g. 3.9) of the target Python installation. This variable is only needed if PyO3 cannot determine the version to target fromabi3-py3*
features, or ifPYO3_CROSS_LIB_DIR
is not set, or if there are multiple versions of Python present inPYO3_CROSS_LIB_DIR
.PYO3_CROSS_PYTHON_IMPLEMENTATION
: Python implementation name ("CPython" or "PyPy") of the target Python installation. CPython is assumed by default when this variable is not set, unlessPYO3_CROSS_LIB_DIR
is set for a Unix-like target and PyO3 can get the interpreter configuration from_sysconfigdata*.py
.
An experimental pyo3
crate feature generate-import-lib
enables the user to cross-compile
extension modules for Windows targets without setting the PYO3_CROSS_LIB_DIR
environment
variable or providing any Windows Python library files. It uses an external python3-dll-a
crate
to generate import libraries for the Python DLL for MinGW-w64 and MSVC compile targets.
python3-dll-a
uses the binutils dlltool
program to generate DLL import libraries for MinGW-w64 targets.
It is possible to override the default dlltool
command name for the cross target
by setting PYO3_MINGW_DLLTOOL
environment variable.
Note: MSVC targets require LLVM binutils or MSVC build tools to be available on the host system.
More specifically, python3-dll-a
requires llvm-dlltool
or lib.exe
executable to be present in PATH
when
targeting *-pc-windows-msvc
. Zig compiler executable can be used in place of llvm-dlltool
when ZIG_COMMAND
environment variable is set to the installed Zig program name ("zig"
or "python -m ziglang"
).
An example might look like the following (assuming your target's sysroot is at /home/pyo3/cross/sysroot
and that your target is armv7
):
export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"
cargo build --target armv7-unknown-linux-gnueabihf
If there are multiple python versions at the cross lib directory and you cannot set a more precise location to include both the libpython
DSO and _sysconfigdata*.py
files, you can set the required version:
export PYO3_CROSS_PYTHON_VERSION=3.8
export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"
cargo build --target armv7-unknown-linux-gnueabihf
Or another example with the same sys root but building for Windows:
export PYO3_CROSS_PYTHON_VERSION=3.9
export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"
cargo build --target x86_64-pc-windows-gnu
Any of the abi3-py3*
features can be enabled instead of setting PYO3_CROSS_PYTHON_VERSION
in the above examples.
PYO3_CROSS_LIB_DIR
can often be omitted when cross compiling extension modules for Unix and macOS targets,
or when cross compiling extension modules for Windows and the experimental generate-import-lib
crate feature is enabled.
The following resources may also be useful for cross-compiling:
- github.com/japaric/rust-cross is a primer on cross compiling Rust.
- github.com/rust-embedded/cross uses Docker to make Rust cross-compilation easier.
Supporting multiple Python versions
PyO3 supports all actively-supported Python 3 and PyPy versions. As much as possible, this is done internally to PyO3 so that your crate's code does not need to adapt to the differences between each version. However, as Python features grow and change between versions, PyO3 cannot offer a completely identical API for every Python version. This may require you to add conditional compilation to your crate or runtime checks for the Python version.
This section of the guide first introduces the pyo3-build-config
crate, which you can use as a build-dependency
to add additional #[cfg]
flags which allow you to support multiple Python versions at compile-time.
Second, we'll show how to check the Python version at runtime. This can be useful when building for multiple versions with the abi3
feature, where the Python API compiled against is not always the same as the one in use.
Conditional compilation for different Python versions
The pyo3-build-config
crate exposes multiple #[cfg]
flags which can be used to conditionally compile code for a given Python version. PyO3 itself depends on this crate, so by using it you can be sure that you are configured correctly for the Python version PyO3 is building against.
This allows us to write code like the following
#[cfg(Py_3_7)]
fn function_only_supported_on_python_3_7_and_up() { }
#[cfg(not(Py_3_8))]
fn function_only_supported_before_python_3_8() { }
#[cfg(not(Py_LIMITED_API))]
fn function_incompatible_with_abi3_feature() { }
The following sections first show how to add these #[cfg]
flags to your build process, and then cover some common patterns for these flags in a little more detail.
To see a full reference of all the #[cfg]
flags provided, see the pyo3-build-config
docs.
Using pyo3-build-config
You can use the #[cfg]
flags in just two steps:
-
Add
pyo3-build-config
with theresolve-config
feature enabled to your crate's build dependencies inCargo.toml
:[build-dependencies] pyo3-build-config = { version = "0.17.3", features = ["resolve-config"] }
-
Add a
build.rs
file to your crate with the following contents:
fn main() {
    // If you have an existing build.rs file, just add this line to it.
    pyo3_build_config::use_pyo3_cfgs();
}
After these steps you are ready to annotate your code!
Common usages of pyo3-build-config
flags
The #[cfg]
flags added by pyo3-build-config
can be combined with all of Rust's logic in the #[cfg]
attribute to create very precise conditional code generation. The following are some common patterns implemented using these flags:
#[cfg(Py_3_7)]
This #[cfg]
marks code that will only be present on Python 3.7 and upwards. There are similar options Py_3_8
, Py_3_9
, Py_3_10
and so on for each minor version.
#[cfg(not(Py_3_7))]
This #[cfg]
marks code that will only be present on Python versions before (but not including) Python 3.7.
#[cfg(not(Py_LIMITED_API))]
This #[cfg]
marks code that is only available when building for the unlimited Python API (i.e. PyO3's abi3
feature is not enabled). This might be useful if you want to ship your extension module as an abi3
wheel and also allow users to compile it from source to make use of optimizations only possible with the unlimited API.
#[cfg(any(Py_3_9, not(Py_LIMITED_API)))]
This #[cfg]
marks code which is available when running Python 3.9 or newer, or when using the unlimited API with an older Python version. Patterns like this are commonly seen on Python APIs which were added to the limited Python API in a specific minor version.
#[cfg(PyPy)]
This #[cfg]
marks code which is running on PyPy.
Checking the Python version at runtime
When building with PyO3's abi3
feature, your extension module will be compiled against a specific minimum version of Python, but may be running on newer Python versions.
For example with PyO3's abi3-py38
feature, your extension will be compiled as if it were for Python 3.8. If you were using pyo3-build-config
, #[cfg(Py_3_8)]
would be present. Your user could freely install and run your abi3 extension on Python 3.9.
There's no way to detect your user doing that at compile time, so instead you need to fall back to runtime checks.
PyO3 provides the APIs Python::version()
and Python::version_info()
to query the running Python version. This allows you to do the following, for example:
#![allow(unused)]
fn main() {
    use pyo3::Python;

    Python::with_gil(|py| {
        // PyO3 supports Python 3.7 and up.
        assert!(py.version_info() >= (3, 7));
        assert!(py.version_info() >= (3, 7, 0));
    });
}
The PyO3 ecosystem
This portion of the guide is dedicated to crates which are external to the main PyO3 project and provide additional functionality you might find useful.
Because these projects evolve independently of the PyO3 repository the content of these articles may fall out of date over time; please file issues on the PyO3 Github to alert maintainers when this is the case.
Logging
It is desirable if both the Python and Rust parts of the application end up logging using the same configuration into the same place.
This section of the guide briefly discusses how to connect the two languages'
logging ecosystems together. The recommended way for Python extension modules is
to configure Rust's logger to send log messages to Python using the pyo3-log
crate. For users who want to do the opposite and send Python log messages to
Rust, see the note at the end of this guide.
Using pyo3-log
to send Rust log messages to Python
The pyo3-log crate allows sending the messages from the Rust side to Python's logging system. This is mostly suitable for writing native extensions for Python programs.
Use pyo3_log::init
to install the logger in its default configuration.
It's also possible to tweak its configuration (mostly to tune its performance).
#![allow(unused)]
fn main() {
    use log::info;
    use pyo3::prelude::*;

    #[pyfunction]
    fn log_something() {
        // This will use the logger installed in `my_module` to send the `info`
        // message to the Python logging facilities.
        info!("Something!");
    }

    #[pymodule]
    fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
        // A good place to install the Rust -> Python logger.
        pyo3_log::init();

        m.add_function(wrap_pyfunction!(log_something, m)?)?;

        Ok(())
    }
}
Then it is up to the Python side to actually output the messages somewhere.
import logging
import my_module
FORMAT = '%(levelname)s %(name)s %(asctime)-15s %(filename)s:%(lineno)d %(message)s'
logging.basicConfig(format=FORMAT)
logging.getLogger().setLevel(logging.INFO)
my_module.log_something()
It is important to initialize the Python loggers first, before calling any Rust functions that may log. If it is not possible to satisfy this limitation, it can be worked around; read the documentation about caching.
The Python to Rust direction
To the best of our knowledge, nobody has implemented the reverse direction yet, though it
should be possible. If interested, the pyo3
community would be happy to
provide guidance.
Using async
and await
If you are working with a Python library that makes use of async functions or wish to provide
Python bindings for an async Rust library, pyo3-asyncio
likely has the tools you need. It provides conversions between async functions in both Python and
Rust and was designed with first-class support for popular Rust runtimes such as
tokio
and async-std
. In addition, all async Python
code runs on the default asyncio
event loop, so pyo3-asyncio
should work just fine with existing
Python libraries.
In the following sections, we'll give a general overview of pyo3-asyncio
explaining how to call
async Python functions with PyO3, how to call async Rust functions from Python, and how to configure
your codebase to manage the runtimes of both.
Quickstart
Here are some examples to get you started right away! A more detailed breakdown of the concepts in these examples can be found in the following sections.
Rust Applications
Here we initialize the runtime, import Python's asyncio
library and run the given future to completion using Python's default EventLoop
and async-std
. Inside the future, we convert asyncio
sleep into a Rust future and await it.
# Cargo.toml dependencies
[dependencies]
pyo3 = { version = "0.14" }
pyo3-asyncio = { version = "0.14", features = ["attributes", "async-std-runtime"] }
async-std = "1.9"
//! main.rs

use pyo3::prelude::*;

#[pyo3_asyncio::async_std::main]
async fn main() -> PyResult<()> {
    let fut = Python::with_gil(|py| {
        let asyncio = py.import("asyncio")?;

        // convert asyncio.sleep into a Rust Future
        pyo3_asyncio::async_std::into_future(asyncio.call_method1("sleep", (1.into_py(py),))?)
    })?;

    fut.await?;

    Ok(())
}
The same application can be written to use tokio
instead using the #[pyo3_asyncio::tokio::main]
attribute.
# Cargo.toml dependencies
[dependencies]
pyo3 = { version = "0.14" }
pyo3-asyncio = { version = "0.14", features = ["attributes", "tokio-runtime"] }
tokio = "1.4"
//! main.rs

use pyo3::prelude::*;

#[pyo3_asyncio::tokio::main]
async fn main() -> PyResult<()> {
    let fut = Python::with_gil(|py| {
        let asyncio = py.import("asyncio")?;

        // convert asyncio.sleep into a Rust Future
        pyo3_asyncio::tokio::into_future(asyncio.call_method1("sleep", (1.into_py(py),))?)
    })?;

    fut.await?;

    Ok(())
}
More details on the usage of this library can be found in the API docs and the primer below.
PyO3 Native Rust Modules
PyO3 Asyncio can also be used to write native modules with async functions.
Add the [lib]
section to Cargo.toml
to make your library a cdylib
that Python can import.
[lib]
name = "my_async_module"
crate-type = ["cdylib"]
Make your project depend on pyo3
with the extension-module
feature enabled and select your
pyo3-asyncio
runtime:
For async-std
:
[dependencies]
pyo3 = { version = "0.14", features = ["extension-module"] }
pyo3-asyncio = { version = "0.14", features = ["async-std-runtime"] }
async-std = "1.9"
For tokio
:
[dependencies]
pyo3 = { version = "0.14", features = ["extension-module"] }
pyo3-asyncio = { version = "0.14", features = ["tokio-runtime"] }
tokio = "1.4"
Export an async function that makes use of async-std
:
//! lib.rs

use pyo3::{prelude::*, wrap_pyfunction};

#[pyfunction]
fn rust_sleep(py: Python<'_>) -> PyResult<&PyAny> {
    pyo3_asyncio::async_std::future_into_py(py, async {
        async_std::task::sleep(std::time::Duration::from_secs(1)).await;
        Ok(Python::with_gil(|py| py.None()))
    })
}

#[pymodule]
fn my_async_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
    Ok(())
}
If you want to use tokio
instead, here's what your module should look like:
//! lib.rs

use pyo3::{prelude::*, wrap_pyfunction};

#[pyfunction]
fn rust_sleep(py: Python<'_>) -> PyResult<&PyAny> {
    pyo3_asyncio::tokio::future_into_py(py, async {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        Ok(Python::with_gil(|py| py.None()))
    })
}

#[pymodule]
fn my_async_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
    Ok(())
}
You can build your module with maturin (see the Using Rust in Python section in the PyO3 guide for setup instructions). After that you should be able to run the Python REPL to try it out.
maturin develop && python3
🔗 Found pyo3 bindings
🐍 Found CPython 3.8 at python3
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>>
>>> from my_async_module import rust_sleep
>>>
>>> async def main():
>>> await rust_sleep()
>>>
>>> # should sleep for 1s
>>> asyncio.run(main())
>>>
Awaiting an Async Python Function in Rust
Let's take a look at a dead simple async Python function:
# Sleep for 1 second
async def py_sleep():
await asyncio.sleep(1)
Async functions in Python are simply functions that return a coroutine
object. For our purposes,
we really don't need to know much about these coroutine
objects. The key factor here is that calling
an async
function is just like calling a regular function, the only difference is that we have
to do something special with the object that it returns.
Normally in Python, that something special is the await
keyword, but in order to await this
coroutine in Rust, we first need to convert it into Rust's version of a coroutine
: a Future
.
That's where pyo3-asyncio
comes in.
pyo3_asyncio::into_future
performs this conversion for us.
The following example uses into_future
to call the py_sleep
function shown above and then await the
coroutine object returned from the call:
use pyo3::prelude::*;

#[pyo3_asyncio::tokio::main]
async fn main() -> PyResult<()> {
    let future = Python::with_gil(|py| -> PyResult<_> {
        // import the module containing the py_sleep function
        let example = py.import("example")?;

        // calling the py_sleep method like a normal function
        // returns a coroutine
        let coroutine = example.call_method0("py_sleep")?;

        // convert the coroutine into a Rust future using the
        // tokio runtime
        pyo3_asyncio::tokio::into_future(coroutine)
    })?;

    // await the future
    future.await?;

    Ok(())
}
Alternatively, the below example shows how to write a #[pyfunction]
which uses into_future
to receive and await
a coroutine argument:
#![allow(unused)]
fn main() {
    use pyo3::prelude::*;

    #[pyfunction]
    fn await_coro(coro: &PyAny) -> PyResult<()> {
        // convert the coroutine into a Rust future using the
        // async_std runtime
        let f = pyo3_asyncio::async_std::into_future(coro)?;

        pyo3_asyncio::async_std::run_until_complete(coro.py(), async move {
            // await the future
            f.await?;
            Ok(())
        })
    }
}
This could be called from Python as:
import asyncio
async def py_sleep():
    await asyncio.sleep(1)
await_coro(py_sleep())
If instead you wanted to pass a callable function to the #[pyfunction]
(i.e. the last line becomes await_coro(py_sleep))
, then the above example needs to be tweaked to first call the callable to get the coroutine:
#![allow(unused)]
fn main() {
    use pyo3::prelude::*;

    #[pyfunction]
    fn await_coro(callable: &PyAny) -> PyResult<()> {
        // get the coroutine by calling the callable
        let coro = callable.call0()?;

        // convert the coroutine into a Rust future using the
        // async_std runtime
        let f = pyo3_asyncio::async_std::into_future(coro)?;

        pyo3_asyncio::async_std::run_until_complete(coro.py(), async move {
            // await the future
            f.await?;
            Ok(())
        })
    }
}
This can be particularly helpful where you need to repeatedly create and await a coroutine. Trying to await the same coroutine multiple times will raise an error:
RuntimeError: cannot reuse already awaited coroutine
If you're interested in learning more about
coroutines
andawaitables
in general, check out the Python 3asyncio
docs for more information.
Awaiting a Rust Future in Python
Here we have the same async function as before written in Rust using the
async-std
runtime:
#![allow(unused)]
fn main() {
    /// Sleep for 1 second
    async fn rust_sleep() {
        async_std::task::sleep(std::time::Duration::from_secs(1)).await;
    }
}
Similar to Python, Rust's async functions also return a special object called a
Future
:
#![allow(unused)] fn main() { let future = rust_sleep(); }
We can convert this Future
object into Python to make it awaitable
. This tells Python that you
can use the await
keyword with it. In order to do this, we'll call
pyo3_asyncio::async_std::future_into_py
:
#![allow(unused)] fn main() { use pyo3::prelude::*; async fn rust_sleep() { async_std::task::sleep(std::time::Duration::from_secs(1)).await; } #[pyfunction] fn call_rust_sleep(py: Python<'_>) -> PyResult<&PyAny> { pyo3_asyncio::async_std::future_into_py(py, async move { rust_sleep().await; Ok(Python::with_gil(|py| py.None())) }) } }
In Python, we can call this pyo3 function just like any other async function:
from example import call_rust_sleep
async def rust_sleep():
await call_rust_sleep()
Managing Event Loops
Python's event loop requires some special treatment, especially regarding the main thread. Some of
Python's asyncio
features, like proper signal handling, require control over the main thread, which
doesn't always play well with Rust.
Luckily, Rust's event loops are pretty flexible and don't need control over the main thread, so in
pyo3-asyncio
, we decided the best way to handle Rust/Python interop was to just surrender the main
thread to Python and run Rust's event loops in the background. Unfortunately, since most event loop
implementations prefer control over the main thread, this can still make some things awkward.
PyO3 Asyncio Initialization
Because Python needs to control the main thread, we can't use the convenient proc macros from Rust
runtimes to handle the main
function or #[test]
functions. Instead, the initialization for PyO3 has to be done from the main
function and the main
thread must block on pyo3_asyncio::run_forever
or pyo3_asyncio::async_std::run_until_complete
.
Because we have to block on one of those functions, we can't use #[async_std::main]
or #[tokio::main]
since it's not a good idea to make long blocking calls during an async function.
Internally, these
#[main]
proc macros are expanded to something like this:
fn main() {
    // your async main fn
    async fn _main_impl() { /* ... */ }
    Runtime::new().block_on(_main_impl());
}
Making a long blocking call inside the
Future
that's being driven byblock_on
prevents that thread from doing anything else and can spell trouble for some runtimes (also this will actually deadlock a single-threaded runtime!). Many runtimes have some sort ofspawn_blocking
mechanism that can avoid this problem, but again that's not something we can use here since we need it to block on the main thread.
For this reason, pyo3-asyncio
provides its own set of proc macros to provide you with this
initialization. These macros are intended to mirror the initialization of async-std
and tokio
while also satisfying the Python runtime's needs.
Here's a full example of PyO3 initialization with the async-std
runtime:
use pyo3::prelude::*;

#[pyo3_asyncio::async_std::main]
async fn main() -> PyResult<()> {
    // PyO3 is initialized - Ready to go

    let fut = Python::with_gil(|py| -> PyResult<_> {
        let asyncio = py.import("asyncio")?;

        // convert asyncio.sleep into a Rust Future
        pyo3_asyncio::async_std::into_future(
            asyncio.call_method1("sleep", (1.into_py(py),))?
        )
    })?;

    fut.await?;

    Ok(())
}
A Note About asyncio.run
In Python 3.7+, the recommended way to run a top-level coroutine with asyncio
is with asyncio.run
. In v0.13
we recommended against using this function due to initialization issues, but in v0.14
it's perfectly valid to use this function... with a caveat.
Since our Rust <--> Python conversions require a reference to the Python event loop, this poses a problem. Imagine we have a PyO3 Asyncio module that defines
a rust_sleep
function like in previous examples. You might rightfully assume that you can pass this directly into asyncio.run
like this:
import asyncio
from my_async_module import rust_sleep
asyncio.run(rust_sleep())
You might be surprised to find out that this throws an error:
Traceback (most recent call last):
File "example.py", line 5, in <module>
asyncio.run(rust_sleep())
RuntimeError: no running event loop
What's happening here is that we are calling rust_sleep
before the future is
actually running on the event loop created by asyncio.run
. This is counter-intuitive, but expected behaviour, and unfortunately there doesn't seem to be a good way of solving this problem within PyO3 Asyncio itself.
However, we can make this example work with a simple workaround:
import asyncio
from my_async_module import rust_sleep
# Calling main will just construct the coroutine that later calls rust_sleep.
# - This ensures that rust_sleep will be called when the event loop is running,
# not before.
async def main():
await rust_sleep()
# Run the main() coroutine at the top-level instead
asyncio.run(main())
Non-standard Python Event Loops
Python allows you to use alternatives to the default asyncio
event loop. One
popular alternative is uvloop
. In v0.13
using non-standard event loops was
a bit of an ordeal, but in v0.14
it's trivial.
Using uvloop
in a PyO3 Asyncio Native Extensions
# Cargo.toml
[lib]
name = "my_async_module"
crate-type = ["cdylib"]
[dependencies]
pyo3 = { version = "0.14", features = ["extension-module"] }
pyo3-asyncio = { version = "0.14", features = ["tokio-runtime"] }
async-std = "1.9"
tokio = "1.4"
//! lib.rs

use pyo3::{prelude::*, wrap_pyfunction};

#[pyfunction]
fn rust_sleep(py: Python<'_>) -> PyResult<&PyAny> {
    pyo3_asyncio::tokio::future_into_py(py, async {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        Ok(Python::with_gil(|py| py.None()))
    })
}

#[pymodule]
fn my_async_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
    Ok(())
}
$ maturin develop && python3
🔗 Found pyo3 bindings
🐍 Found CPython 3.8 at python3
Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Python 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> import uvloop
>>>
>>> import my_async_module
>>>
>>> uvloop.install()
>>>
>>> async def main():
... await my_async_module.rust_sleep()
...
>>> asyncio.run(main())
>>>
Using uvloop
in Rust Applications
Using uvloop
in Rust applications is a bit trickier, but it's still possible
with relatively few modifications.
Unfortunately, we can't make use of the #[pyo3_asyncio::<runtime>::main]
attribute with non-standard event loops. This is because the #[pyo3_asyncio::<runtime>::main]
proc macro has to interact with the Python
event loop before we can install the uvloop
policy.
[dependencies]
async-std = "1.9"
pyo3 = "0.14"
pyo3-asyncio = { version = "0.14", features = ["async-std-runtime"] }
//! main.rs

use pyo3::{prelude::*, types::PyType};

fn main() -> PyResult<()> {
    pyo3::prepare_freethreaded_python();

    Python::with_gil(|py| {
        let uvloop = py.import("uvloop")?;
        uvloop.call_method0("install")?;

        // store a reference for the assertion
        let uvloop = PyObject::from(uvloop);

        pyo3_asyncio::async_std::run(py, async move {
            // verify that we are on a uvloop.Loop
            Python::with_gil(|py| -> PyResult<()> {
                assert!(pyo3_asyncio::async_std::get_current_loop(py)?.is_instance(
                    uvloop
                        .as_ref(py)
                        .getattr("Loop")?
                        .downcast::<PyType>()
                        .unwrap()
                )?);
                Ok(())
            })?;

            async_std::task::sleep(std::time::Duration::from_secs(1)).await;

            Ok(())
        })
    })
}
Additional Information
- Managing event loop references can be tricky with pyo3-asyncio. See Event Loop References in the API docs to get a better intuition for how event loop references are managed in this library.
- Testing pyo3-asyncio libraries and applications requires a custom test harness since Python requires control over the main thread. You can find a testing guide in the API docs for the
testing
module
Frequently Asked Questions and troubleshooting
I'm experiencing deadlocks using PyO3 with lazy_static or once_cell!
lazy_static
and once_cell::sync
both use locks to ensure that initialization is performed only by a single thread. Because the Python GIL is an additional lock this can lead to deadlocks in the following way:
- A thread (thread A) which has acquired the Python GIL starts initialization of a
lazy_static
value. - The initialization code calls some Python API which temporarily releases the GIL e.g.
Python::import
. - Another thread (thread B) acquires the Python GIL and attempts to access the same
lazy_static
value. - Thread B is blocked, because it waits for
lazy_static
's initialization to lock to release. - Thread A is blocked, because it waits to re-acquire the GIL which thread B still holds.
- Deadlock.
PyO3 provides a struct GILOnceCell
which works equivalently to OnceCell
but relies solely on the Python GIL for thread safety. This means it can be used in place of lazy_static
or once_cell
where you are experiencing the deadlock described above. See the documentation for GILOnceCell
for an example of how to use it.
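As a rough sketch of the pattern (assuming PyO3 0.17's pyo3::once_cell module path; caching an empty list is just an illustration):
use pyo3::once_cell::GILOnceCell;
use pyo3::prelude::*;
use pyo3::types::PyList;

// Initialized at most once; the Python GIL provides the synchronization.
static CACHED_LIST: GILOnceCell<Py<PyList>> = GILOnceCell::new();

fn cached_list(py: Python<'_>) -> &PyList {
    CACHED_LIST
        .get_or_init(py, || PyList::empty(py).into())
        .as_ref(py)
}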
I can't run cargo test
; or I can't build in a Cargo workspace: I'm having linker issues like "Symbol not found" or "Undefined reference to _PyExc_SystemError"!
Currently, #340 causes cargo test
to fail with linking errors when the extension-module
feature is activated. Linking errors can also happen when building in a cargo workspace where a different crate also uses PyO3 (see #2521). For now, there are three ways we can work around these issues.
- Make the
extension-module
feature optional. Build withmaturin develop --features "extension-module"
[dependencies.pyo3]
version = "0.17.3"
[features]
extension-module = ["pyo3/extension-module"]
- Make the
extension-module
feature optional and default. Run tests withcargo test --no-default-features
:
[dependencies.pyo3]
version = "0.17.3"
[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]
- If you are using a
pyproject.toml
file to control maturin settings, add the following section:
[tool.maturin]
features = ["pyo3/extension-module"]
# Or for maturin 0.12:
# cargo-extra-args = ["--features", "pyo3/extension-module"]
I can't run cargo test
: my crate cannot be found for tests in tests/
directory!
The Rust book suggests to put integration tests inside a tests/
directory.
For a PyO3 extension-module
project where the crate-type
is set to "cdylib"
in your Cargo.toml
,
the compiler won't be able to find your crate and will display errors such as E0432
or E0463
:
error[E0432]: unresolved import `my_crate`
--> tests/test_my_crate.rs:1:5
|
1 | use my_crate;
| ^^^^^^^^^^^^ no external crate `my_crate`
The best solution is to make your crate types include both rlib
and cdylib
:
# Cargo.toml
[lib]
crate-type = ["cdylib", "rlib"]
Ctrl-C doesn't do anything while my Rust code is executing!
This is because Ctrl-C raises a SIGINT signal, which is handled by the calling Python process by simply setting a flag to action upon later. This flag isn't checked while Rust code called from Python is executing, only once control returns to the Python interpreter.
You can give the Python interpreter a chance to process the signal properly by calling Python::check_signals
. It's good practice to call this function regularly if you have a long-running Rust function so that your users can cancel it.
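For example, a long-running #[pyfunction] might be structured like this sketch (the loop body is illustrative):
use pyo3::prelude::*;

#[pyfunction]
fn run_for_a_while(py: Python<'_>) -> PyResult<()> {
    for _ in 0..1_000_000 {
        // ... do a small chunk of work here ...

        // Give Python a chance to handle Ctrl-C; this returns an Err
        // (KeyboardInterrupt) if a signal was received, which then
        // propagates back to the calling Python code.
        py.check_signals()?;
    }
    Ok(())
}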
#[pyo3(get)]
clones my field!
You may have a nested struct similar to this:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] #[derive(Clone)] struct Inner { /* fields omitted */ } #[pyclass] struct Outer { #[pyo3(get)] inner: Inner, } #[pymethods] impl Outer { #[new] fn __new__() -> Self { Self { inner: Inner {} } } } }
When Python code accesses Outer
's field, PyO3 will return a new object on every access (note that their addresses are different):
outer = Outer()
a = outer.inner
b = outer.inner
assert a is b, f"a: {a}\nb: {b}"
AssertionError: a: <builtins.Inner object at 0x00000238FFB9C7B0>
b: <builtins.Inner object at 0x00000238FFB9C830>
This can be especially confusing if the field is mutable, as getting the field and then mutating it won't persist - you'll just get a fresh clone of the original on the next access. Unfortunately Python and Rust don't agree about ownership - if PyO3 gave out references to (possibly) temporary Rust objects to Python code, Python code could then keep that reference alive indefinitely. Therefore returning Rust objects requires cloning.
If you don't want that cloning to happen, a workaround is to allocate the field on the Python heap and store a reference to that, by using Py<...>
:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] #[derive(Clone)] struct Inner { /* fields omitted */ } #[pyclass] struct Outer { #[pyo3(get)] inner: Py<Inner>, } #[pymethods] impl Outer { #[new] fn __new__(py: Python<'_>) -> PyResult<Self> { Ok(Self { inner: Py::new(py, Inner {})?, }) } } }
This time a
and b
are the same object:
outer = Outer()
a = outer.inner
b = outer.inner
assert a is b, f"a: {a}\nb: {b}"
print(f"a: {a}\nb: {b}")
a: <builtins.Inner object at 0x0000020044FCC670>
b: <builtins.Inner object at 0x0000020044FCC670>
The downside to this approach is that any Rust code working on the Outer
struct now has to acquire the GIL to do anything with its field.
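For example, a sketch of Rust code reading that field (the with_inner helper is hypothetical) might look like:
use pyo3::prelude::*;

#[pyclass]
struct Inner { /* fields omitted */ }

#[pyclass]
struct Outer {
    #[pyo3(get)]
    inner: Py<Inner>,
}

impl Outer {
    // Hypothetical helper: Rust-side access to `inner` now needs the GIL.
    fn with_inner(&self) {
        Python::with_gil(|py| {
            let _inner: PyRef<'_, Inner> = self.inner.borrow(py);
            // ... work with `_inner` here ...
        });
    }
}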
I want to use the pyo3
crate re-exported from a dependency but the proc-macros fail!
All PyO3 proc-macros (#[pyclass]
, #[pyfunction]
, #[derive(FromPyObject)]
and so on) expect the pyo3
crate to be available under that name in your crate
root, which is the normal situation when pyo3
is a direct dependency of your
crate.
However, when the dependency is renamed, or your crate only indirectly depends
on pyo3
, you need to let the macro code know where to find the crate. This is
done with the crate
attribute:
#![allow(unused)] fn main() { use pyo3::prelude::*; pub extern crate pyo3; mod reexported { pub use ::pyo3; } #[pyclass] #[pyo3(crate = "reexported::pyo3")] struct MyClass; }
Migrating from older PyO3 versions
This guide can help you upgrade code through breaking changes from one PyO3 version to the next. For a detailed list of all changes, see the CHANGELOG.
from 0.16.* to 0.17
Type checks have been changed for PyMapping
and PySequence
types
Previously the type checks for PyMapping
and PySequence
(implemented in PyTryFrom
)
used the Python C-API functions PyMapping_Check
and PySequence_Check
.
Unfortunately these functions are not sufficient for distinguishing such types,
leading to inconsistent behavior (see
pyo3/pyo3#2072).
PyO3 0.17 changes these downcast checks to explicitly test if the type is a
subclass of the corresponding abstract base class collections.abc.Mapping
or
collections.abc.Sequence
. Note this requires calling into Python, which may
incur a performance penalty over the previous method. If this performance
penalty is a problem, you may be able to perform your own checks and use
try_from_unchecked
(unsafe).
Another side-effect is that a pyclass defined in Rust with PyO3 will need to
be registered with the corresponding Python abstract base class for
downcasting to succeed. PySequence::register
and PyMapping::register
have
been added to make it easy to do this from Rust code. These are equivalent to
calling collections.abc.Mapping.register(MappingPyClass)
or
collections.abc.Sequence.register(SequencePyClass)
from Python.
For example, for a mapping class defined in Rust:
#![allow(unused)]
fn main() {
    use pyo3::prelude::*;
    use pyo3::types::PyList;
    use std::collections::HashMap;

    #[pyclass(mapping)]
    struct Mapping {
        index: HashMap<String, usize>,
    }

    #[pymethods]
    impl Mapping {
        #[new]
        fn new(elements: Option<&PyList>) -> PyResult<Self> {
            // ...
            // truncated implementation of this mapping pyclass - basically a wrapper around a HashMap
        }
    }
}
You must register the class with collections.abc.Mapping
before the downcast will work:
#![allow(unused)] fn main() { let m = Py::new(py, Mapping { index }).unwrap(); assert!(m.as_ref(py).downcast::<PyMapping>().is_err()); PyMapping::register::<Mapping>(py).unwrap(); assert!(m.as_ref(py).downcast::<PyMapping>().is_ok()); }
Note that this requirement may go away in the future when a pyclass is able to inherit from the abstract base class directly (see pyo3/pyo3#991).
The multiple-pymethods
feature now requires Rust 1.62
Due to limitations in the inventory
crate which the multiple-pymethods
feature depends on, this feature now
requires Rust 1.62. For more information see dtolnay/inventory#32.
Added impl IntoPy<Py<PyString>> for &str
This may cause inference errors.
Before:
use pyo3::prelude::*;

fn main() {
    Python::with_gil(|py| {
        // Cannot infer either `Py<PyAny>` or `Py<PyString>`
        let _test = "test".into_py(py);
    });
}
After, some type annotations may be necessary:
use pyo3::prelude::*; fn main() { Python::with_gil(|py| { let _test: Py<PyAny> = "test".into_py(py); }); }
The pyproto
feature is now disabled by default
In preparation for removing the deprecated #[pyproto]
attribute macro in a future PyO3 version, it is now gated behind an opt-in feature flag. This also gives a slight saving to compile times for code which does not use the deprecated macro.
PyTypeObject
trait has been deprecated
The PyTypeObject
trait already was near-useless; almost all functionality was already on the PyTypeInfo
trait, which PyTypeObject
had a blanket implementation based upon. In PyO3 0.17 the final method, PyTypeObject::type_object
was moved to PyTypeInfo::type_object
.
To migrate, update trait bounds and imports from PyTypeObject
to PyTypeInfo
.
Before:
#![allow(unused)] fn main() { use pyo3::Python; use pyo3::type_object::PyTypeObject; use pyo3::types::PyType; fn get_type_object<T: PyTypeObject>(py: Python<'_>) -> &PyType { T::type_object(py) } }
After
#![allow(unused)] fn main() { use pyo3::{Python, PyTypeInfo}; use pyo3::types::PyType; fn get_type_object<T: PyTypeInfo>(py: Python<'_>) -> &PyType { T::type_object(py) } Python::with_gil(|py| { get_type_object::<pyo3::types::PyList>(py); }); }
impl<T, const N: usize> IntoPy<PyObject> for [T; N]
now requires T: IntoPy
rather than T: ToPyObject
If this leads to errors, simply implement IntoPy
. Because pyclasses already implement IntoPy
, you probably don't need to worry about this.
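If you do hit an error, a minimal sketch of an IntoPy implementation might look like this (MyValue is a made-up type used only for illustration):
use pyo3::prelude::*;

struct MyValue(i32);

// A by-value conversion satisfying the new `T: IntoPy` bound,
// so that `[MyValue; N]` can still be converted to a Python object.
impl IntoPy<PyObject> for MyValue {
    fn into_py(self, py: Python<'_>) -> PyObject {
        self.0.into_py(py)
    }
}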
Each #[pymodule]
can now only be initialized once per process
To make PyO3 modules sound in the presence of Python sub-interpreters, for now it has been necessary to explicitly disable the ability to initialize a #[pymodule]
more than once in the same process. Attempting to do this will now raise an ImportError
.
from 0.15.* to 0.16
Drop support for older technologies
PyO3 0.16 has increased minimum Rust version to 1.48 and minimum Python version to 3.7. This enables use of newer language features (enabling some of the other additions in 0.16) and simplifies maintenance of the project.
#[pyproto]
has been deprecated
In PyO3 0.15, the #[pymethods]
attribute macro gained support for implementing "magic methods" such as __str__
(aka "dunder" methods). This implementation was not quite finalized at the time, with a few edge cases to be decided upon. The existing #[pyproto]
attribute macro was left untouched, because it covered these edge cases.
In PyO3 0.16, the #[pymethods]
implementation has been completed and is now the preferred way to implement magic methods. To allow the PyO3 project to move forward, #[pyproto]
has been deprecated (with expected removal in PyO3 0.18).
Migration from #[pyproto]
to #[pymethods]
is straightforward; copying the existing methods directly from the #[pyproto]
trait implementation is all that is needed in most cases.
Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::class::{PyObjectProtocol, PyIterProtocol}; use pyo3::types::PyString; #[pyclass] struct MyClass { } #[pyproto] impl PyObjectProtocol for MyClass { fn __str__(&self) -> &'static [u8] { b"hello, world" } } #[pyproto] impl PyIterProtocol for MyClass { fn __iter__(slf: PyRef<Self>) -> PyResult<&PyAny> { PyString::new(slf.py(), "hello, world").iter() } } }
After
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyString; #[pyclass] struct MyClass { } #[pymethods] impl MyClass { fn __str__(&self) -> &'static [u8] { b"hello, world" } fn __iter__(slf: PyRef<Self>) -> PyResult<&PyAny> { PyString::new(slf.py(), "hello, world").iter() } } }
Removed PartialEq
for object wrappers
The Python object wrappers Py
and PyAny
had implementations of PartialEq
so that object_a == object_b
would compare the Python objects for pointer
equality, which corresponds to the is
operator, not the ==
operator in
Python. This has been removed in favor of a new method: use
object_a.is(object_b)
. This also has the advantage of not requiring the same
wrapper type for object_a
and object_b
; you can now directly compare a
Py<T>
with a &PyAny
without having to convert.
To check for Python object equality (the Python ==
operator), use the new
method eq()
.
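As a rough sketch of the new style (using None, whose singleton nature makes the identity check succeed):
use pyo3::prelude::*;

fn main() {
    Python::with_gil(|py| {
        let a: PyObject = py.None();
        let b: PyObject = py.None();
        // Identity comparison, equivalent to Python's `is` operator;
        // both handles point at the same `None` singleton.
        assert!(a.is(&b));
    });
}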
Container magic methods now match Python behavior
In PyO3 0.15, __getitem__
, __setitem__
and __delitem__
in #[pymethods]
would generate only the mapping implementation for a #[pyclass]
. To match the Python behavior, these methods now generate both the mapping and sequence implementations.
This means that classes implementing these #[pymethods]
will now also be treated as sequences, same as a Python class
would be. Small differences in behavior may result:
- PyO3 will allow instances of these classes to be cast to
PySequence
as well asPyMapping
. - Python will provide a default implementation of
__iter__
(if the class did not have one) which repeatedly calls__getitem__
with integers (starting at 0) until anIndexError
is raised.
To explain this in detail, consider the following Python class:
class ExampleContainer:
def __len__(self):
return 5
def __getitem__(self, idx: int) -> int:
if idx < 0 or idx > 5:
raise IndexError()
return idx
This class implements a Python sequence.
The __len__
and __getitem__
methods are also used to implement a Python mapping. In the Python C-API, these methods are not shared: the sequence __len__
and __getitem__
are defined by the sq_length
and sq_item
slots, and the mapping equivalents are mp_length
and mp_subscript
. There are similar distinctions for __setitem__
and __delitem__
.
Because there is no such distinction from Python, implementing these methods will fill the mapping and sequence slots simultaneously. A Python class with __len__
implemented, for example, will have both the sq_length
and mp_length
slots filled.
The PyO3 behavior in 0.16 has been changed to be closer to this Python behavior by default.
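For reference, a rough Rust counterpart of the ExampleContainer class above might look like this under 0.16 (returning a PyIndexError from __getitem__ so that the default __iter__ terminates):
use pyo3::exceptions::PyIndexError;
use pyo3::prelude::*;

// With 0.16 these #[pymethods] fill both the mapping and the sequence
// slots, matching the Python class shown above.
#[pyclass]
struct ExampleContainer {}

#[pymethods]
impl ExampleContainer {
    fn __len__(&self) -> usize {
        5
    }

    fn __getitem__(&self, idx: isize) -> PyResult<isize> {
        if idx < 0 || idx > 5 {
            Err(PyIndexError::new_err("index out of range"))
        } else {
            Ok(idx)
        }
    }
}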
wrap_pymodule!
and wrap_pyfunction!
now respect privacy correctly
Prior to PyO3 0.16 the wrap_pymodule!
and wrap_pyfunction!
macros could use modules and functions whose defining fn
was not reachable according to Rust privacy rules.
For example, the following code was legal before 0.16, but in 0.16 is rejected because the wrap_pymodule!
macro cannot access the private_submodule
function:
#![allow(unused)] fn main() { mod foo { use pyo3::prelude::*; #[pymodule] fn private_submodule(_py: Python<'_>, m: &PyModule) -> PyResult<()> { Ok(()) } } use pyo3::prelude::*; use foo::*; #[pymodule] fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_wrapped(wrap_pymodule!(private_submodule))?; Ok(()) } }
To fix it, make the private submodule visible, e.g. with pub
or pub(crate)
.
#![allow(unused)] fn main() { mod foo { use pyo3::prelude::*; #[pymodule] pub(crate) fn private_submodule(_py: Python<'_>, m: &PyModule) -> PyResult<()> { Ok(()) } } use pyo3::prelude::*; use pyo3::wrap_pymodule; use foo::*; #[pymodule] fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_wrapped(wrap_pymodule!(private_submodule))?; Ok(()) } }
from 0.14.* to 0.15
Changes in sequence indexing
For all types that take sequence indices (PyList
, PyTuple
and PySequence
),
the API has been made consistent to only take usize
indices, for consistency
with Rust's indexing conventions. Negative indices, which were only
sporadically supported even in APIs that took isize
, now aren't supported
anywhere.
Further, the get_item
methods now always return a PyResult
instead of
panicking on invalid indices. The Index
trait has been implemented instead,
and provides the same panic behavior as on Rust vectors.
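A small sketch of what this looks like in practice (the exact indices are arbitrary):
use pyo3::{types::PyList, Python};

fn main() {
    Python::with_gil(|py| {
        let list = PyList::new(py, &[1, 2, 3]);
        // `get_item` now takes a `usize` and returns a `PyResult`
        // instead of panicking on an out-of-range index.
        assert!(list.get_item(1).is_ok());
        assert!(list.get_item(7).is_err());
    });
}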
Note that slice indices (accepted by PySequence::get_slice
and other) still
inherit the Python behavior of clamping the indices to the actual length, and
not panicking/returning an error on out of range indices.
An additional advantage of using Rust's indexing conventions for these types is that these types can now also support Rust's indexing operators as part of a consistent API:
#![allow(unused)] fn main() { use pyo3::{Python, types::PyList}; Python::with_gil(|py| { let list = PyList::new(py, &[1, 2, 3]); assert_eq!(list[0..2].to_string(), "[1, 2]"); }); }
from 0.13.* to 0.14
auto-initialize
feature is now opt-in
For projects embedding Python in Rust, PyO3 no longer automatically initializes a Python interpreter on the first call to Python::with_gil
(or Python::acquire_gil
) unless the auto-initialize
feature is enabled.
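If you prefer not to enable the feature, one option (a sketch, assuming an embedding binary) is to initialize the interpreter explicitly before the first use:
use pyo3::prelude::*;

fn main() {
    // Without the "auto-initialize" feature, the embedding program must
    // initialize the Python interpreter itself.
    pyo3::prepare_freethreaded_python();
    Python::with_gil(|py| {
        py.run("print('hello from embedded Python')", None, None).unwrap();
    });
}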
New multiple-pymethods
feature
#[pymethods]
have been reworked with a simpler default implementation which removes the dependency on the inventory
crate. This reduces dependencies and compile times for the majority of users.
The limitation of the new default implementation is that it cannot support multiple #[pymethods]
blocks for the same #[pyclass]
. If you need this functionality, you must enable the multiple-pymethods
feature which will switch #[pymethods]
to the inventory-based implementation.
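A rough sketch of what the feature enables (the class and method names are made up; the Cargo.toml line is shown only as a comment for illustration):
// Cargo.toml (illustrative): pyo3 = { version = "0.14", features = ["multiple-pymethods"] }
use pyo3::prelude::*;

#[pyclass]
struct Counter {
    value: u64,
}

#[pymethods]
impl Counter {
    #[new]
    fn new() -> Self {
        Counter { value: 0 }
    }
}

// A second #[pymethods] block for the same #[pyclass] only works with the
// inventory-based implementation enabled by "multiple-pymethods".
#[pymethods]
impl Counter {
    fn increment(&mut self) {
        self.value += 1;
    }
}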
Deprecated #[pyproto]
methods
Some protocol (aka __dunder__
) methods such as __bytes__
and __format__
have been possible to implement in two ways in PyO3 for some time: via a #[pyproto]
(e.g. PyObjectProtocol
for the methods listed here), or by writing them directly in #[pymethods]
. This is only true for a handful of the #[pyproto]
methods (for technical reasons to do with the way PyO3 currently interacts with the Python C-API).
In the interest of having only one way to do things, the #[pyproto]
forms of these methods have been deprecated.
To migrate just move the affected methods from a #[pyproto]
to a #[pymethods]
block.
Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::class::basic::PyObjectProtocol; #[pyclass] struct MyClass { } #[pyproto] impl PyObjectProtocol for MyClass { fn __bytes__(&self) -> &'static [u8] { b"hello, world" } } }
After:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { } #[pymethods] impl MyClass { fn __bytes__(&self) -> &'static [u8] { b"hello, world" } } }
from 0.12.* to 0.13
Minimum Rust version increased to Rust 1.45
PyO3 0.13
makes use of new Rust language features stabilised between Rust 1.40 and Rust 1.45. If you are using a Rust compiler older than Rust 1.45, you will need to update your toolchain to be able to continue using PyO3.
Runtime changes to support the CPython limited API
In PyO3 0.13
support was added for compiling against the CPython limited API. This had a number of implications for all PyO3 users, described here.
The largest of these is that all types created from PyO3 are what CPython calls "heap" types. The specific implications of this are:
- If you wish to subclass one of these types from Rust you must mark it
#[pyclass(subclass)]
, as you would if you wished to allow subclassing it from Python code (see the sketch after this list). - Type objects are now mutable - Python code can set attributes on them.
__module__
on types without#[pyclass(module="mymodule")]
no longer returnsbuiltins
, it now raisesAttributeError
.
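A minimal sketch of the first point (BaseClass and SubClass are illustrative names):
use pyo3::prelude::*;

// `subclass` opts the heap type into being subclassable,
// whether the subclass is written in Python or in Rust.
#[pyclass(subclass)]
struct BaseClass {}

// Subclassing from Rust also requires the parent to be marked `subclass`.
#[pyclass(extends = BaseClass)]
struct SubClass {}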
from 0.11.* to 0.12
PyErr
has been reworked
In PyO3 0.12
the PyErr
type has been re-implemented to be significantly more compatible with
the standard Rust error handling ecosystem. Specifically PyErr
now implements
Error + Send + Sync
, which are the standard traits used for error types.
While this has necessitated the removal of a number of APIs, the resulting PyErr
type should now
be much easier to work with. The following sections list the changes in detail and how to
migrate to the new APIs.
PyErr::new
and PyErr::from_type
now require Send + Sync
for their argument
For most uses no change will be needed. If you are trying to construct PyErr
from a value that is
not Send + Sync
, you will need to first create the Python object and then use
PyErr::from_instance
.
Similarly, any types which implemented PyErrArguments
will now need to be Send + Sync
.
PyErr
's contents are now private
It is no longer possible to access the fields .ptype
, .pvalue
and .ptraceback
of a PyErr
.
You should instead now use the new methods PyErr::ptype
, PyErr::pvalue
and PyErr::ptraceback
.
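A rough sketch of the new accessor style (PyTypeError is used purely for illustration):
use pyo3::exceptions::PyTypeError;
use pyo3::prelude::*;

fn main() {
    Python::with_gil(|py| {
        let err: PyErr = PyTypeError::new_err("error message");
        // The type, value and traceback are now read through methods
        // rather than through public fields.
        let _ptype = err.ptype(py);
        let _pvalue = err.pvalue(py);
        let _ptraceback = err.ptraceback(py);
    });
}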
PyErrValue
and PyErr::from_value
have been removed
As these were part of the internals of PyErr
which have been reworked, these APIs no longer exist.
If you used this API, it is recommended to use PyException::new_err
(see the section on
Exception types).
Into<PyResult<T>>
for PyErr
has been removed
This implementation was redundant. Just construct the Result::Err
variant directly.
Before:
#![allow(unused)] fn main() { let result: PyResult<()> = PyErr::new::<TypeError, _>("error message").into(); }
After (also using the new reworked exception types; see the following section):
#![allow(unused)] fn main() { use pyo3::{PyResult, exceptions::PyTypeError}; let result: PyResult<()> = Err(PyTypeError::new_err("error message")); }
Exception types have been reworked
Previously exception types were zero-sized marker types purely used to construct PyErr
. In PyO3
0.12, these types have been replaced with full definitions and are usable in the same way as PyAny
, PyDict
etc. This
makes it possible to interact with Python exception objects.
The new types also have names starting with the "Py" prefix. For example, before:
#![allow(unused)] fn main() { let err: PyErr = TypeError::py_err("error message"); }
After:
#![allow(unused)] fn main() { use pyo3::{PyErr, PyResult, Python, type_object::PyTypeObject}; use pyo3::exceptions::{PyBaseException, PyTypeError}; Python::with_gil(|py| -> PyResult<()> { let err: PyErr = PyTypeError::new_err("error message"); // Uses Display for PyErr, new for PyO3 0.12 assert_eq!(err.to_string(), "TypeError: error message"); // Now possible to interact with exception instances, new for PyO3 0.12 let instance: &PyBaseException = err.instance(py); assert_eq!(instance.getattr("__class__")?, PyTypeError::type_object(py).as_ref()); Ok(()) }).unwrap(); }
FromPy
has been removed
To simplify the PyO3 conversion traits, the FromPy
trait has been removed. Previously there were
two ways to define the to-Python conversion for a type:
FromPy<T> for PyObject
and IntoPy<PyObject> for T
.
Now there is only one way to define the conversion, IntoPy
, so downstream crates may need to
adjust accordingly.
Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; struct MyPyObjectWrapper(PyObject); impl FromPy<MyPyObjectWrapper> for PyObject { fn from_py(other: MyPyObjectWrapper, _py: Python<'_>) -> Self { other.0 } } }
After
#![allow(unused)] fn main() { use pyo3::prelude::*; struct MyPyObjectWrapper(PyObject); impl IntoPy<PyObject> for MyPyObjectWrapper { fn into_py(self, _py: Python<'_>) -> PyObject { self.0 } } }
Similarly, code which was using the FromPy
trait can be trivially rewritten to use IntoPy
.
Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; Python::with_gil(|py| { let obj = PyObject::from_py(1.234, py); }) }
After:
#![allow(unused)] fn main() { use pyo3::prelude::*; Python::with_gil(|py| { let obj: PyObject = 1.234.into_py(py); }) }
PyObject
is now a type alias of Py<PyAny>
This should change very little from a usage perspective. If you implemented traits for both
PyObject
and Py<T>
, you may find you can just remove the PyObject
implementation.
AsPyRef
has been removed
As PyObject
has been changed to be just a type alias, the only remaining implementor of AsPyRef
was Py<T>
. This removed the need for a trait, so the AsPyRef::as_ref
method has been moved to
Py::as_ref
.
This should require no code changes except removing use pyo3::AsPyRef
for code which did not use
pyo3::prelude::*
.
Before:
#![allow(unused)] fn main() { use pyo3::{AsPyRef, Py, types::PyList}; pyo3::Python::with_gil(|py| { let list_py: Py<PyList> = PyList::empty(py).into(); let list_ref: &PyList = list_py.as_ref(py); }) }
After:
#![allow(unused)] fn main() { use pyo3::{Py, types::PyList}; pyo3::Python::with_gil(|py| { let list_py: Py<PyList> = PyList::empty(py).into(); let list_ref: &PyList = list_py.as_ref(py); }) }
from 0.10.* to 0.11
Stable Rust
PyO3 now supports the stable Rust toolchain. The minimum required version is 1.39.0.
#[pyclass]
structs must now be Send
or unsendable
Because #[pyclass]
structs can be sent between threads by the Python interpreter, they must implement
Send
or be declared as unsendable
(by #[pyclass(unsendable)]
).
Note that unsendable
is added in PyO3 0.11.1
and Send
is always required in PyO3 0.11.0
.
This may "break" some code which previously was accepted, even though it could be unsound. There can be two fixes:
-
If you think that your
#[pyclass]
actually must beSend
able, then let's implementSend
. A common, safer way is using thread-safe types. E.g.,Arc
instead ofRc
,Mutex
instead ofRefCell
, andBox<dyn Send + T>
instead ofBox<dyn T>
.Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; use std::rc::Rc; use std::cell::RefCell; #[pyclass] struct NotThreadSafe { shared_bools: Rc<RefCell<Vec<bool>>>, closure: Box<dyn Fn()> } }
After:
#![allow(unused)] fn main() { #![allow(dead_code)] use pyo3::prelude::*; use std::sync::{Arc, Mutex}; #[pyclass] struct ThreadSafe { shared_bools: Arc<Mutex<Vec<bool>>>, closure: Box<dyn Fn() + Send> } }
In situations where you cannot change your
#[pyclass]
to automatically implement Send
(e.g., when it contains a raw pointer), you can use unsafe impl Send
. In such cases, care should be taken to ensure the struct is actually thread safe. See the Rustonomicon for more. -
If you think that your
#[pyclass]
should not be accessed by another thread, you can useunsendable
flag. A class marked withunsendable
panics when accessed by another thread, making it thread-safe to expose an unsendable object to the Python interpreter.Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Unsendable { pointers: Vec<*mut std::os::raw::c_char>, } }
After:
#![allow(unused)] fn main() { #![allow(dead_code)] use pyo3::prelude::*; #[pyclass(unsendable)] struct Unsendable { pointers: Vec<*mut std::os::raw::c_char>, } }
All PyObject
and Py<T>
methods now take Python
as an argument
Previously, a few methods such as PyObject::get_refcnt
did not take Python
as an argument (to
ensure that the Python GIL was held by the current thread). Technically, this was not sound.
To migrate, just pass a py
argument to any calls to these methods.
Before:
#![allow(unused)] fn main() { pyo3::Python::with_gil(|py| { py.None().get_refcnt(); }) }
After:
#![allow(unused)] fn main() { pyo3::Python::with_gil(|py| { py.None().get_refcnt(py); }) }
from 0.9.* to 0.10
ObjectProtocol
is removed
All methods are moved to PyAny
.
And since now all native types (e.g., PyList
) implements Deref<Target=PyAny>
,
all you need to do is remove ObjectProtocol
from your code.
Or if you use ObjectProtocol
by use pyo3::prelude::*
, you have to do nothing.
Before:
#![allow(unused)] fn main() { use pyo3::ObjectProtocol; pyo3::Python::with_gil(|py| { let obj = py.eval("lambda: 'Hi :)'", None, None).unwrap(); let hi: &pyo3::types::PyString = obj.call0().unwrap().downcast().unwrap(); assert_eq!(hi.len().unwrap(), 5); }) }
After:
#![allow(unused)] fn main() { pyo3::Python::with_gil(|py| { let obj = py.eval("lambda: 'Hi :)'", None, None).unwrap(); let hi: &pyo3::types::PyString = obj.call0().unwrap().downcast().unwrap(); assert_eq!(hi.len().unwrap(), 5); }) }
No #![feature(specialization)]
in user code
While PyO3 itself still requires specialization and nightly Rust,
now you don't have to use #![feature(specialization)]
in your crate.
from 0.8.* to 0.9
#[new]
interface
PyRawObject
is now removed and our syntax for constructors has changed.
Before:
#![allow(unused)] fn main() { #[pyclass] struct MyClass {} #[pymethods] impl MyClass { #[new] fn new(obj: &PyRawObject) { obj.init(MyClass { }) } } }
After:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass {} #[pymethods] impl MyClass { #[new] fn new() -> Self { MyClass {} } } }
Basically you can return Self
or Result<Self>
directly.
For more, see the constructor section of this guide.
PyCell
PyO3 0.9 introduces PyCell
, which is a RefCell
-like object wrapper
for ensuring Rust's rules regarding aliasing of references are upheld.
For more detail, see the
Rust Book's section on Rust's rules of references
For #[pymethods]
or #[pyfunction]
s, your existing code should continue to work without any change.
Python exceptions will automatically be raised when your functions are used in a way which breaks Rust's
rules of references.
Here is an example.
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct Names { names: Vec<String> } #[pymethods] impl Names { #[new] fn new() -> Self { Names { names: vec![] } } fn merge(&mut self, other: &mut Names) { self.names.append(&mut other.names) } } Python::with_gil(|py| { let names = PyCell::new(py, Names::new()).unwrap(); pyo3::py_run!(py, names, r" try: names.merge(names) assert False, 'Unreachable' except RuntimeError as e: assert str(e) == 'Already borrowed' "); }) }
Names
has a merge
method, which takes &mut self
and another argument of type &mut Self
.
Given this #[pyclass]
, calling names.merge(names)
in Python raises
a PyBorrowMutError
exception, since it requires two mutable borrows of names
.
However, for #[pyproto]
and some functions, you need to manually fix the code.
Object creation
In 0.8 object creation was done with PyRef::new
and PyRefMut::new
.
In 0.9 these have both been removed.
To upgrade code, please use
PyCell::new
instead.
If you need PyRef
or PyRefMut
, just call .borrow()
or .borrow_mut()
on the newly-created PyCell
.
Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass {} Python::with_gil(|py| { let obj_ref = PyRef::new(py, MyClass {}).unwrap(); }) }
After:
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass {} Python::with_gil(|py| { let obj = PyCell::new(py, MyClass {}).unwrap(); let obj_ref = obj.borrow(); }) }
Object extraction
For PyClass
types T
, &T
and &mut T
no longer have FromPyObject
implementations.
Instead you should extract PyRef<T>
or PyRefMut<T>
, respectively.
If T
implements Clone
, you can extract T
itself.
In addition, you can also extract &PyCell<T>
, though you rarely need it.
Before:
let obj: &PyAny = create_obj();
let obj_ref: &MyClass = obj.extract().unwrap();
let obj_ref_mut: &mut MyClass = obj.extract().unwrap();
After:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::IntoPyDict; #[pyclass] #[derive(Clone)] struct MyClass {} #[pymethods] impl MyClass { #[new]fn new() -> Self { MyClass {} }} Python::with_gil(|py| { let typeobj = py.get_type::<MyClass>(); let d = [("c", typeobj)].into_py_dict(py); let create_obj = || py.eval("c()", None, Some(d)).unwrap(); let obj: &PyAny = create_obj(); let obj_cell: &PyCell<MyClass> = obj.extract().unwrap(); let obj_cloned: MyClass = obj.extract().unwrap(); // extracted by cloning the object { let obj_ref: PyRef<'_, MyClass> = obj.extract().unwrap(); // we need to drop obj_ref before we can extract a PyRefMut due to Rust's rules of references } let obj_ref_mut: PyRefMut<'_, MyClass> = obj.extract().unwrap(); }) }
#[pyproto]
Most of the arguments to methods in #[pyproto]
impls require a
FromPyObject
implementation.
So if your protocol methods take &T
or &mut T
(where T: PyClass
),
please use PyRef
or PyRefMut
instead.
Before:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::class::PySequenceProtocol; #[pyclass] struct ByteSequence { elements: Vec<u8>, } #[pyproto] impl PySequenceProtocol for ByteSequence { fn __concat__(&self, other: &Self) -> PyResult<Self> { let mut elements = self.elements.clone(); elements.extend_from_slice(&other.elements); Ok(Self { elements }) } } }
After:
#![allow(unused)] fn main() { #[allow(deprecated)] #[cfg(feature = "pyproto")] { use pyo3::prelude::*; use pyo3::class::PySequenceProtocol; #[pyclass] struct ByteSequence { elements: Vec<u8>, } #[pyproto] impl PySequenceProtocol for ByteSequence { fn __concat__(&self, other: PyRef<'p, Self>) -> PyResult<Self> { let mut elements = self.elements.clone(); elements.extend_from_slice(&other.elements); Ok(Self { elements }) } } } }
PyO3 and rust-cpython
PyO3 began as a fork of rust-cpython when rust-cpython wasn't maintained. Over time PyO3 has become fundamentally different from rust-cpython.
Macros
While rust-cpython has a macro_rules!
based DSL for declaring modules and classes, PyO3 uses proc macros. PyO3 also doesn't change your structs and functions, so you can still use them as normal Rust code.
rust-cpython
py_class!(class MyClass |py| {
data number: i32;
def __new__(_cls, arg: i32) -> PyResult<MyClass> {
MyClass::create_instance(py, arg)
}
def half(&self) -> PyResult<i32> {
Ok(self.number(py) / 2)
}
});
pyo3
#![allow(unused)] fn main() { use pyo3::prelude::*; #[pyclass] struct MyClass { num: u32, } #[pymethods] impl MyClass { #[new] fn new(num: u32) -> Self { MyClass { num } } fn half(&self) -> PyResult<u32> { Ok(self.num / 2) } } }
Ownership and lifetimes
While in rust-cpython you always own Python objects, PyO3 allows efficient borrowed objects, and most APIs are available with references.
Here is an example of the PyList API:
rust-cpython
impl PyList {
fn new(py: Python<'_>) -> PyList {...}
fn get_item(&self, py: Python<'_>, index: isize) -> PyObject {...}
}
pyo3
impl PyList {
fn new(py: Python<'_>) -> &PyList {...}
fn get_item(&self, index: isize) -> &PyAny {...}
}
In PyO3, all object references are bounded by the GIL lifetime.
So the owned Python object is not required, and it is safe to have functions like fn py<'p>(&'p self) -> Python<'p> {}
.
Error handling
rust-cpython requires a Python
parameter for constructing a PyErr
, so error handling ergonomics is pretty bad. It is not possible to use ?
with Rust errors.
PyO3, on the other hand, does not require Python
for constructing a PyErr
, it is only required if you want to raise an exception in Python with the PyErr::restore()
method. Due to various std::convert::From<E> for PyErr
implementations for Rust standard error types E
, propagating ?
is supported automatically.
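As a small sketch (parse_int is a made-up function; the conversion relies on the From implementation PyO3 provides for standard library error types such as std::num::ParseIntError):
use pyo3::prelude::*;

// `?` converts the standard library error into a PyErr, which Python
// sees as an exception, so no explicit error mapping is needed.
#[pyfunction]
fn parse_int(s: &str) -> PyResult<i32> {
    let value: i32 = s.parse()?;
    Ok(value)
}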
Using in Python a Rust function with trait bounds
PyO3 allows for easy conversion from Rust to Python for certain functions and classes (see the conversion table). However, it is not always straightforward to convert Rust code that requires a given trait implementation as an argument.
This tutorial explains how to convert a Rust function that takes a trait as argument for use in Python with classes implementing the same methods as the trait.
Why is this useful?
Pros
- Make your Rust code available to Python users
- Code complex algorithms in Rust with the help of the borrow checker
Cons
- Not as fast as native Rust (type conversion has to be performed and one part of the code runs in Python)
- You need to adapt your code to expose it
Example
Let's work with the following basic example of an implementation of an optimization solver operating on a given model.
Let's say we have a function solve
that operates on a model and mutates its state.
The argument of the function can be any model that implements the Model
trait:
#![allow(unused)] fn main() { #![allow(dead_code)] pub trait Model { fn set_variables(&mut self, inputs: &Vec<f64>); fn compute(&mut self); fn get_results(&self) -> Vec<f64>; } pub fn solve<T: Model>(model: &mut T) { println!("Magic solver that mutates the model into a resolved state"); } }
Let's assume we have the following constraints:
- We cannot change that code as it runs on many Rust models.
- We also have many Python models that cannot be solved as this solver is not available in that language. Rewriting it in Python would be cumbersome and error-prone, as everything is already available in Rust.
How could we expose this solver to Python thanks to PyO3?
Implementation of the trait bounds for the Python class
If a Python class implements the same three methods as the Model
trait, it seems logical it could be adapted to use the solver.
However, it is not possible to pass a PyObject
to it as it does not implement the Rust trait (even if the Python model has the required methods).
In order to implement the trait, we must write a wrapper around the calls in Rust to the Python model. The method signatures must be the same as the trait, keeping in mind that the Rust trait cannot be changed for the purpose of making the code available in Python.
The Python model we want to expose is the following one, which already contains all the required methods:
class Model:
def set_variables(self, inputs):
self.inputs = inputs
def compute(self):
self.results = [elt**2 - 3 for elt in self.inputs]
def get_results(self):
return self.results
The following wrapper will call the Python model from Rust, using a struct to hold the model as a PyAny
object:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyAny; pub trait Model { fn set_variables(&mut self, inputs: &Vec<f64>); fn compute(&mut self); fn get_results(&self) -> Vec<f64>; } struct UserModel { model: Py<PyAny>, } impl Model for UserModel { fn set_variables(&mut self, var: &Vec<f64>) { println!("Rust calling Python to set the variables"); Python::with_gil(|py| { let values: Vec<f64> = var.clone(); let list: PyObject = values.into_py(py); let py_model = self.model.as_ref(py); py_model .call_method("set_variables", (list,), None) .unwrap(); }) } fn get_results(&self) -> Vec<f64> { println!("Rust calling Python to get the results"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("get_results", (), None) .unwrap() .extract() .unwrap() }) } fn compute(&mut self) { println!("Rust calling Python to perform the computation"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("compute", (), None) .unwrap(); }) } } }
Now that this bit is implemented, let's expose the model wrapper to Python. Let's add the PyO3 annotations and add a constructor:
#![allow(unused)] fn main() { #![allow(dead_code)] pub trait Model { fn set_variables(&mut self, inputs: &Vec<f64>); fn compute(&mut self); fn get_results(&self) -> Vec<f64>; } use pyo3::prelude::*; use pyo3::types::PyAny; #[pyclass] struct UserModel { model: Py<PyAny>, } #[pymodule] fn trait_exposure(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_class::<UserModel>()?; Ok(()) } #[pymethods] impl UserModel { #[new] pub fn new(model: Py<PyAny>) -> Self { UserModel { model } } } }
Now we add the PyO3 annotations to the trait implementation:
#[pymethods]
impl Model for UserModel {
// the previous trait implementation
}
However, the previous code will not compile. The compilation error is the following one:
error: #[pymethods] cannot be used on trait impl blocks
That's a bummer! However, we can write a second wrapper around these functions to call them directly. This wrapper will also perform the type conversions between Python and Rust.
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyAny; pub trait Model { fn set_variables(&mut self, inputs: &Vec<f64>); fn compute(&mut self); fn get_results(&self) -> Vec<f64>; } #[pyclass] struct UserModel { model: Py<PyAny>, } impl Model for UserModel { fn set_variables(&mut self, var: &Vec<f64>) { println!("Rust calling Python to set the variables"); Python::with_gil(|py| { let values: Vec<f64> = var.clone(); let list: PyObject = values.into_py(py); let py_model = self.model.as_ref(py); py_model .call_method("set_variables", (list,), None) .unwrap(); }) } fn get_results(&self) -> Vec<f64> { println!("Rust calling Python to get the results"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("get_results", (), None) .unwrap() .extract() .unwrap() }) } fn compute(&mut self) { println!("Rust calling Python to perform the computation"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("compute", (), None) .unwrap(); }) } } #[pymethods] impl UserModel { pub fn set_variables(&mut self, var: Vec<f64>) { println!("Set variables from Python calling Rust"); Model::set_variables(self, &var) } pub fn get_results(&mut self) -> Vec<f64> { println!("Get results from Python calling Rust"); Model::get_results(self) } pub fn compute(&mut self) { println!("Compute from Python calling Rust"); Model::compute(self) } } }
This wrapper handles the type conversion between the PyO3 requirements and the trait. In order to meet PyO3 requirements, this wrapper must:
- return an object of type
PyResult
- use only values, not references in the method signatures
Let's run the Python file:
class Model:
def set_variables(self, inputs):
self.inputs = inputs
def compute(self):
self.results = [elt**2 - 3 for elt in self.inputs]
def get_results(self):
return self.results
if __name__=="__main__":
import trait_exposure
myModel = Model()
my_rust_model = trait_exposure.UserModel(myModel)
my_rust_model.set_variables([2.0])
print("Print value from Python: ", myModel.inputs)
my_rust_model.compute()
print("Print value from Python through Rust: ", my_rust_model.get_results())
print("Print value directly from Python: ", myModel.get_results())
This outputs:
Set variables from Python calling Rust
Set variables from Rust calling Python
Print value from Python: [2.0]
Compute from Python calling Rust
Compute from Rust calling Python
Get results from Python calling Rust
Get results from Rust calling Python
Print value from Python through Rust: [1.0]
Print value directly from Python: [1.0]
We have now successfully exposed a Rust model that implements the Model
trait to Python!
We will now expose the solve
function, but before that, let's talk about type errors.
Type errors in Python
What happens if you have type errors when using Python and how can you improve the error messages?
Wrong types in Python function arguments
Let's assume in the first case that your Python file uses my_rust_model.set_variables(2.0)
instead of my_rust_model.set_variables([2.0])
.
The Rust signature expects a vector, which corresponds to a list in Python. What happens if, instead of a vector, we pass a single value?
When the Python code is executed, we get:
File "main.py", line 15, in <module>
my_rust_model.set_variables(2)
TypeError
It is a type error and Python points to it, so it's easy to identify and solve.
Wrong types in Python method signatures
Let's assume now that the return type of one of the methods of our Model class is wrong, for example the get_results
method that is expected to return a Vec<f64>
in Rust, a list in Python.
class Model:
def set_variables(self, inputs):
self.inputs = inputs
def compute(self):
self.results = [elt**2 -3 for elt in self.inputs]
def get_results(self):
return self.results[0]
#return self.results <-- this is the expected output
This call results in the following panic:
pyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: PyErr { type: Py(0x10dcf79f0, PhantomData) }
This error message is not helpful for a Python user who does not know anything about Rust, or for someone who does not know that PyO3 was used to interface with the Rust code.
However, as we are responsible for making the Rust code available to Python, we can do something about it.
The issue is that we called unwrap
everywhere we could, and therefore any panic from PyO3 will be directly forwarded to the end user.
Let's modify the code performing the type conversion to give a helpful error message to the Python user:
In our get_results
method, we used the following call to perform the type conversion:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyAny; pub trait Model { fn set_variables(&mut self, inputs: &Vec<f64>); fn compute(&mut self); fn get_results(&self) -> Vec<f64>; } #[pyclass] struct UserModel { model: Py<PyAny>, } impl Model for UserModel { fn get_results(&self) -> Vec<f64> { println!("Rust calling Python to get the results"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("get_results", (), None) .unwrap() .extract() .unwrap() }) } fn set_variables(&mut self, var: &Vec<f64>) { println!("Rust calling Python to set the variables"); Python::with_gil(|py| { let values: Vec<f64> = var.clone(); let list: PyObject = values.into_py(py); let py_model = self.model.as_ref(py); py_model .call_method("set_variables", (list,), None) .unwrap(); }) } fn compute(&mut self) { println!("Rust calling Python to perform the computation"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("compute", (), None) .unwrap(); }) } } }
Let's break it down in order to perform better error handling:
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyAny; pub trait Model { fn set_variables(&mut self, inputs: &Vec<f64>); fn compute(&mut self); fn get_results(&self) -> Vec<f64>; } #[pyclass] struct UserModel { model: Py<PyAny>, } impl Model for UserModel { fn get_results(&self) -> Vec<f64> { println!("Get results from Rust calling Python"); Python::with_gil(|py| { let py_result: &PyAny = self .model .as_ref(py) .call_method("get_results", (), None) .unwrap(); if py_result.get_type().name().unwrap() != "list" { panic!("Expected a list for the get_results() method signature, got {}", py_result.get_type().name().unwrap()); } py_result.extract() }) .unwrap() } fn set_variables(&mut self, var: &Vec<f64>) { println!("Rust calling Python to set the variables"); Python::with_gil(|py| { let values: Vec<f64> = var.clone(); let list: PyObject = values.into_py(py); let py_model = self.model.as_ref(py); py_model .call_method("set_variables", (list,), None) .unwrap(); }) } fn compute(&mut self) { println!("Rust calling Python to perform the computation"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("compute", (), None) .unwrap(); }) } } }
By doing so, you catch the result of the Python computation and check its type in order to be able to deliver a better error message before performing the unwrapping.
Of course, it does not cover all possible wrong outputs: the user could return a list of strings instead of a list of floats. In this case, a runtime panic would still occur due to PyO3, but with an error message that is much more difficult for a non-Rust user to decipher.
It is up to the developer exposing the Rust code to decide how much effort to invest into Python type error handling and improved error messages.
The final code
Now let's expose the solve()
function to make it available from Python.
It is not possible to directly expose the solve
function to Python, as the type conversion cannot be performed.
It requires an object implementing the Model
trait as input.
However, the UserModel
already implements this trait.
Because of this, we can write a function wrapper that takes the UserModel
(which has already been exposed to Python) as an argument in order to call the core function solve
.
It is also required to make the struct public.
#![allow(unused)] fn main() { use pyo3::prelude::*; use pyo3::types::PyAny; pub trait Model { fn set_variables(&mut self, var: &Vec<f64>); fn get_results(&self) -> Vec<f64>; fn compute(&mut self); } pub fn solve<T: Model>(model: &mut T) { println!("Magic solver that mutates the model into a resolved state"); } #[pyfunction] #[pyo3(name = "solve")] pub fn solve_wrapper(model: &mut UserModel) { solve(model); } #[pyclass] pub struct UserModel { model: Py<PyAny>, } #[pymodule] fn trait_exposure(_py: Python<'_>, m: &PyModule) -> PyResult<()> { m.add_class::<UserModel>()?; m.add_function(wrap_pyfunction!(solve_wrapper, m)?)?; Ok(()) } #[pymethods] impl UserModel { #[new] pub fn new(model: Py<PyAny>) -> Self { UserModel { model } } pub fn set_variables(&mut self, var: Vec<f64>) { println!("Set variables from Python calling Rust"); Model::set_variables(self, &var) } pub fn get_results(&mut self) -> Vec<f64> { println!("Get results from Python calling Rust"); Model::get_results(self) } pub fn compute(&mut self) { Model::compute(self) } } impl Model for UserModel { fn set_variables(&mut self, var: &Vec<f64>) { println!("Rust calling Python to set the variables"); Python::with_gil(|py| { let values: Vec<f64> = var.clone(); let list: PyObject = values.into_py(py); let py_model = self.model.as_ref(py); py_model .call_method("set_variables", (list,), None) .unwrap(); }) } fn get_results(&self) -> Vec<f64> { println!("Get results from Rust calling Python"); Python::with_gil(|py| { let py_result: &PyAny = self .model .as_ref(py) .call_method("get_results", (), None) .unwrap(); if py_result.get_type().name().unwrap() != "list" { panic!("Expected a list for the get_results() method signature, got {}", py_result.get_type().name().unwrap()); } py_result.extract() }) .unwrap() } fn compute(&mut self) { println!("Rust calling Python to perform the computation"); Python::with_gil(|py| { self.model .as_ref(py) .call_method("compute", (), None) .unwrap(); }) } } }
Typing and IDE hints for your Python package
PyO3 provides an easy-to-use interface for writing native Python libraries in Rust. The accompanying Maturin allows you to build and publish them as a package. Yet, for a better user experience, Python libraries should provide type hints and documentation for all public entities, so that IDEs can show them during development and type-checking tools such as mypy
can use them to properly verify the code.
Currently the best solution to this problem is to manually maintain *.pyi
files and ship them along with the package.
Introduction to pyi
files
pyi
(an abbreviation for Python Interface
) is called a Stub File
in most of the documentation related to them. A very good definition of what it is can be found in the old MyPy documentation:
A stubs file only contains a description of the public interface of the module without any implementations.
Most Python developers have probably already encountered them when trying to use the IDE's "Go to Definition" function on a builtin type. For example, the definitions of a few standard exceptions look like this:
class BaseException(object):
args: Tuple[Any, ...]
__cause__: BaseException | None
__context__: BaseException | None
__suppress_context__: bool
__traceback__: TracebackType | None
def __init__(self, *args: object) -> None: ...
def __str__(self) -> str: ...
def __repr__(self) -> str: ...
def with_traceback(self: _TBE, tb: TracebackType | None) -> _TBE: ...
class SystemExit(BaseException):
code: int
class Exception(BaseException): ...
class StopIteration(Exception):
value: Any
As we can see, these are not full definitions containing implementations, but just a description of the interface. It is usually all that the user of the library needs.
What do the PEPs say?
As of the time of writing this documentation the pyi
files are referenced in three PEPs.
PEP8 - Style Guide for Python Code - #Function Annotations (last point) recommends that all third party library creators provide stub files as the source of knowledge about the package for type checker tools.
(...) it is expected that users of third party library packages may want to run type checkers over those packages. For this purpose PEP 484 recommends the use of stub files: .pyi files that are read by the type checker in preference of the corresponding .py files. (...)
PEP484 - Type Hints - #Stub Files defines stub files as follows.
Stub files are files containing type hints that are only for use by the type checker, not at runtime.
It contains a specification for them (highly recommended reading, since it contains at least one thing that is not used in normal Python code) and also some general information about where to store the stub files.
PEP561 - Distributing and Packaging Type Information describes in detail how to build packages that will enable type checking. In particular it contains information about how the stub files must be distributed in order for type checkers to use them.
How to do it?
PEP561 recognizes three ways of distributing type information:
inline
- the typing is placed directly in source (py
) files;separate package with stub files
- the typing is placed inpyi
files distributed in their own, separate package;in-package stub files
- the typing is placed inpyi
files distributed in the same package as source files.
The first way is tricky with PyO3 since we do not have py
files. When this has been investigated and the necessary changes are implemented, this document will be updated.
The second way is easy to do, and the whole work can be fully separated from the main library code. The example repo for the package with stub files can be found in PEP561 references section: Stub package repository
The third way is described below.
Including pyi
files in your PyO3/Maturin build package
When source files are in the same package as stub files, they should be placed next to each other. We need a way to do that with Maturin. Also, in order to mark our package as typing-enabled we need to add an empty file named py.typed
to the package.
If you do not have other Python files
If you do not need to add any other Python files apart from pyi
to the package, Maturin provides a way to do most of the work for you. As documented in the Maturin Guide, the only thing you need to do is create a stub file for your module named <module_name>.pyi
in your project root and Maturin will do the rest.
my-rust-project/
├── Cargo.toml
├── my_project.pyi # <<< add type stubs for Rust functions in the my_project module here
├── pyproject.toml
└── src
└── lib.rs
For an example of a pyi
file, see the my_project.pyi
content section.
If you need other Python files
If you need to add other Python files apart from pyi
to the package, you can do that too, but it requires some more work. Maturin provides an easy way to add files to a package (documentation). You just need to create a folder with the name of your module next to the Cargo.toml
file (for customization see documentation linked above).
The folder structure would be:
my-project
├── Cargo.toml
├── my_project
│ ├── __init__.py
│ ├── my_project.pyi
│ ├── other_python_file.py
│ └── py.typed
├── pyproject.toml
├── Readme.md
└── src
└── lib.rs
Let's go into a little more detail about the files inside the package folder.
__init__.py
content
As we now specify our own package content, we have to provide the __init__.py
file, so the folder is treated as a package and we can import things from it. We can always use the same content that Maturin creates for us if we do not specify a Python source folder. For PyO3 bindings it would be:
from .my_project import *
That way everything that is exposed by our native module can be imported directly from the package.
py.typed
requirement
As stated in PEP561:
Package maintainers who wish to support type checking of their code MUST add a marker file named py.typed to their package supporting typing. This marker applies recursively: if a top-level package includes it, all its sub-packages MUST support type checking as well.
If we do not include that file, some IDEs might still use our pyi
files to show hints, but the type checkers might not. MyPy will raise an error in this situation:
error: Skipping analyzing "my_project": found module but no type hints or library stubs
The file is just a marker file, so it should be empty.
my_project.pyi
content
Our module's stub file. This document does not aim to describe how to write stub files, since you can find plenty of documentation on that, starting from the already-quoted PEP484.
The example can look like this:
class Car:
"""
A class representing a car.
:param body_type: the name of body type, e.g. hatchback, sedan
:param horsepower: power of the engine in horsepower
"""
def __init__(self, body_type: str, horsepower: int) -> None: ...
@classmethod
def from_unique_name(cls, name: str) -> 'Car':
"""
Creates a Car based on unique name
:param name: model name of a car to be created
:return: a Car instance with default data
"""
def best_color(self) -> str:
"""
Gets the best color for the car.
:return: the name of the color our great algorithm thinks is the best for this car
"""
Changelog
All notable changes to this project will be documented in this file. For help with updating to new PyO3 versions, please see the migration guide.
The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
To see unreleased changes, please see the CHANGELOG on the main branch.
0.17.3 - 2022-11-01
Packaging
- Support Python 3.11. (Previous versions of PyO3 0.17 have been tested against Python 3.11 release candidates and are expected to be compatible; this is the first version tested against Python 3.11.0.) #2708
Added
- Implemented
ExactSizeIterator
forPyListIterator
,PyDictIterator
,PySetIterator
andPyFrozenSetIterator
. #2676
Fixed
- Fix regression of
impl FromPyObject for [T; N]
no longer accepting types passingPySequence_Check
, e.g. NumPy arrays, since version 0.17.0. This is the same fix that was applied to impl FromPyObject for Vec<T>
in version 0.17.1, extended to fixed-size arrays. #2675 - Fix UB in
FunctionDescription::extract_arguments_fastcall
due to creating slices from a null pointer. #2687
0.17.2 - 2022-10-04
Packaging
- Added optional
chrono
feature to convertchrono
types into types in thedatetime
module. #2612
Added
- Add support for
num-bigint
feature onPyPy
. #2626
Fixed
- Correctly implement
__richcmp__
for enums, fixing__ne__
always returning True
. #2622 - Fix compile error since 0.17.0 with
Option<&SomePyClass>
argument with a default. #2630 - Fix regression of
impl FromPyObject for Vec<T>
no longer accepting types passingPySequence_Check
, e.g. NumPy arrays, since 0.17.0. #2631
0.17.1 - 2022-08-28
Fixed
- Fix visibility of
PyDictItems
,PyDictKeys
, andPyDictValues
types added in PyO3 0.17.0. - Fix compile failure when using
#[pyo3(from_py_with = "...")]
attribute on an argument of typeOption<T>
. #2592 - Fix clippy
redundant-closure
lint on**kwargs
arguments for#[pyfunction]
and#[pymethods]
. #2595
0.17.0 - 2022-08-23
Packaging
- Update inventory dependency to
0.3
(themultiple-pymethods
feature now requires Rust 1.62 for correctness). #2492
Added
- Add
timezone_utc
. #1588 - Implement
ToPyObject
for[T; N]
. #2313 - Add
PyDictKeys
,PyDictValues
andPyDictItems
Rust types. #2358 - Add
append_to_inittab
. #2377 - Add FFI definition
PyFrame_GetCode
. #2406 - Add
PyCode
andPyFrame
high level objects. #2408 - Add FFI definitions
Py_fstring_input
,sendfunc
, and_PyErr_StackItem
. #2423 - Add
PyDateTime::new_with_fold
,PyTime::new_with_fold
,PyTime::get_fold
, andPyDateTime::get_fold
for PyPy. #2428 - Accept
#[pyo3(name)]
on enum variants. #2457 - Add
CompareOp::matches
to implement__richcmp__
as the result of a Ruststd::cmp::Ordering
comparison. #2460 - Add
PySuper
type. #2486 - Support PyPy on Windows with the
generate-import-lib
feature. #2506 - Add FFI definitions
Py_EnterRecursiveCall
andPy_LeaveRecursiveCall
. #2511 - Add
PyDict::get_item_with_error
. #2536 - Add
#[pyclass(sequence)]
option. #2567
Changed
- Change datetime constructors taking a
tzinfo
to takeOption<&PyTzInfo>
instead ofOption<&PyObject>
:PyDateTime::new
,PyDateTime::new_with_fold
,PyTime::new
, andPyTime::new_with_fold
. #1588 - Move
PyTypeObject::type_object
method to thePyTypeInfo
trait, and deprecate thePyTypeObject
trait. #2287 - Methods of
Py
andPyAny
now acceptimpl IntoPy<Py<PyString>>
rather than just&str
to allow use of theintern!
macro. #2312 - Change the deprecated
pyproto
feature to be opt-in instead of opt-out. #2322 - Emit better error messages when
#[pyfunction]
return types do not implementIntoPy
. #2326 - Require
T: IntoPy
forimpl<T, const N: usize> IntoPy<PyObject> for [T; N]
instead ofT: ToPyObject
. #2326 - Deprecate the
ToBorrowedObject
trait. #2333 - Iterators over
PySet
andPyDict
will now panic if the underlying collection is mutated during the iteration. #2380 - Allow
#[classattr]
methods to be fallible. #2385 - Prevent multiple
#[pymethods]
with the same name for a single#[pyclass]
. #2399 - Fixup
lib_name
when usingPYO3_CONFIG_FILE
. #2404 - Add a message to the
ValueError
raised by the#[derive(FromPyObject)]
implementation for a tuple struct. #2414 - Allow
#[classattr]
methods to takePython
argument. #2456 - Rework
PyCapsule
type to resolve soundness issues: #2485PyCapsule::new
andPyCapsule::new_with_destructor
now takename: Option<CString>
instead of&CStr
.- The destructor
F
inPyCapsule::new_with_destructor
must now beSend
. PyCapsule::get_context
deprecated in favour ofPyCapsule::context
which doesn't take apy: Python<'_>
argument.PyCapsule::set_context
no longer takes apy: Python<'_>
argument.PyCapsule::name
now returnsPyResult<Option<&CStr>>
instead of&CStr
.
FromPyObject::extract
forVec<T>
no longer accepts Pythonstr
inputs. #2500- Ensure each
#[pymodule]
is only initialized once. #2523 pyo3_build_config::add_extension_module_link_args
now also emits linker arguments forwasm32-unknown-emscripten
. #2538- Type checks for
PySequence
andPyMapping
now require inputs to inherit from (or register with)collections.abc.Sequence
andcollections.abc.Mapping
respectively. #2477 - Disable
PyFunction
when building for abi3 or PyPy. #2542 - Deprecate
Python::acquire_gil
. #2549
Removed
- Remove all functionality deprecated in PyO3 0.15. #2283
- Make the
Dict
,WeakRef
andBaseNativeType
members of thePyClass
private implementation details. #2572
Fixed
- Enable incorrectly disabled FFI definition
PyThreadState_DeleteCurrent
. #2357 - Fix
wrap_pymodule
interactions with name resolution rules: it no longer "sees through" glob imports ofuse submodule::*
whensubmodule::submodule
is a#[pymodule]
. #2363 - Correct FFI definition
PyEval_EvalCodeEx
to take*const *mut PyObject
array arguments instead of*mut *mut PyObject
. #2368 - Fix "raw-ident" structs (e.g.
#[pyclass] struct r#RawName
) incorrectly havingr#
at the start of the class name created in Python. #2395 - Correct FFI definition
Py_tracefunc
to beunsafe extern "C" fn
(was previously safe). #2407 - Fix compile failure with
#[pyo3(from_py_with = "...")]
annotations on a field in a#[derive(FromPyObject)]
struct. #2414 - Fix FFI definitions
_PyDateTime_BaseTime
and_PyDateTime_BaseDateTime
lacking leading underscores in their names. #2421 - Remove FFI definition
PyArena
on Python 3.10 and up. #2421 - Fix FFI definition
PyCompilerFlags
missing membercf_feature_version
on Python 3.8 and up. #2423 - Fix FFI definition
PyAsyncMethods
missing memberam_send
on Python 3.10 and up. #2423 - Fix FFI definition
PyGenObject
having multiple incorrect members on various Python versions. #2423 - Fix FFI definition
PySyntaxErrorObject
missing membersend_lineno
andend_offset
on Python 3.10 and up. #2423 - Fix FFI definition
PyHeapTypeObject
missing memberht_module
on Python 3.9 and up. #2423 - Fix FFI definition
PyFrameObject
having multiple incorrect members on various Python versions. #2424 #2434 - Fix FFI definition
PyTypeObject
missing deprecated fieldtp_print
on Python 3.8. #2428 - Fix FFI definitions
PyDateTime_CAPI
.PyDateTime_Date
,PyASCIIObject
,PyBaseExceptionObject
,PyListObject
, andPyTypeObject
on PyPy. #2428 - Fix FFI definition
_inittab
fieldinitfunc
typo'd asinitfun
. #2431 - Fix FFI definitions
_PyDateTime_BaseTime
and_PyDateTime_BaseDateTime
incorrectly havingfold
member. #2432 - Fix FFI definitions
PyTypeObject
.PyHeapTypeObject
, andPyCFunctionObject
having incorrect members on PyPy 3.9. #2433 - Fix FFI definition
PyGetSetDef
to have*const c_char
fordoc
member (not*mut c_char
). #2439 - Fix
#[pyo3(from_py_with = "...")]
being ignored for 1-element tuple structs and transparent structs. #2440 - Use
memoffset
to avoid UB when computingPyCell
layout. #2450 - Fix incorrect enum names being returned by the generated
repr
for enums renamed by#[pyclass(name = "...")]
#2457 - Fix
PyObject_CallNoArgs
incorrectly being available when building for abi3 on Python 3.9. #2476 - Fix several clippy warnings generated by
#[pyfunction]
arguments. #2503
0.16.6 - 2022-08-23
Changed
- Fix soundness issues with
PyCapsule
type with select workarounds. Users are encouraged to upgrade to PyO3 0.17 at their earliest convenience, which contains API breakages that fix the issues in a long-term fashion. #2522PyCapsule::new
andPyCapsule::new_with_destructor
now take ownership of a copy of thename
to resolve a possible use-after-free.PyCapsule::name
now returns an emptyCStr
instead of dereferencing a null pointer if the capsule has no name.- The destructor
F
inPyCapsule::new_with_destructor
will never be called if the capsule is deleted from a thread other than the one which the capsule was created in (a warning will be emitted).
- Panics during drop of panic payload caught by PyO3 will now abort. #2544
0.16.5 - 2022-05-15
Added
- Add an experimental
generate-import-lib
feature to support auto-generating non-abi3 python import libraries for Windows targets. #2364 - Add FFI definition
Py_ExitStatusException
. #2374
Changed
- Deprecate experimental
generate-abi3-import-lib
feature in favor of the newgenerate-import-lib
feature. #2364
Fixed
- Added missing
warn_default_encoding
field toPyConfig
on 3.10+. The previously missing field could result in incorrect behavior or crashes. #2370 - Fixed order of
pathconfig_warnings
andprogram_name
fields ofPyConfig
on 3.10+. Previously, the order of the fields was swapped and this could lead to incorrect behavior or crashes. #2370
0.16.4 - 2022-04-14
Added
- Add
PyTzInfoAccess
trait for safe access to time zone information. #2263 - Add an experimental
generate-abi3-import-lib
feature to auto-generatepython3.dll
import libraries for Windows. #2282 - Add FFI definitions for
PyDateTime_BaseTime
andPyDateTime_BaseDateTime
. #2294
Changed
- Improved performance of failing calls to
FromPyObject::extract
which is common when functions accept multiple distinct types. #2279 - Default to "m" ABI tag when choosing
libpython
link name for CPython 3.7 on Unix. #2288 - Allow compiling "abi3" extensions without a working build host Python interpreter. #2293
Fixed
- Crates depending on PyO3 can collect code coverage via LLVM instrumentation using stable Rust. #2286
- Fix segfault when calling FFI methods
PyDateTime_DATE_GET_TZINFO
orPyDateTime_TIME_GET_TZINFO
ondatetime
ortime
without a tzinfo. #2289 - Fix directory names starting with the letter
n
breaking serialization of the interpreter configuration on Windows since PyO3 0.16.3. #2299
0.16.3 - 2022-04-05
Packaging
- Extend
parking_lot
dependency supported versions to include 0.12. #2239
Added
- Add methods to
pyo3_build_config::InterpreterConfig
to run Python scripts using the configured executable. #2092 - Add
as_bytes
method toPy<PyBytes>
. #2235 - Add FFI definitions for
PyType_FromModuleAndSpec
,PyType_GetModule
,PyType_GetModuleState
andPyModule_AddType
. #2250 - Add
pyo3_build_config::cross_compiling_from_to
as a helper to detect when PyO3 is cross-compiling. #2253 - Add
#[pyclass(mapping)]
option to leave sequence slots empty in container implementations. #2265 - Add
PyString::intern
to enable usage of the Python's built-in string interning. #2268 - Add
intern!
macro which can be used to amortize the cost of creating Python strings by storing them inside aGILOnceCell
. #2269 - Add
PYO3_CROSS_PYTHON_IMPLEMENTATION
environment variable for selecting the default cross Python implementation. #2272
Changed
- Allow
#[pyo3(crate = "...", text_signature = "...")]
options to be used directly in#[pyclass(crate = "...", text_signature = "...")]
. #2234 - Make
PYO3_CROSS_LIB_DIR
environment variable optional when cross compiling. #2241 - Mark
METH_FASTCALL
calling convention as limited API on Python 3.10. #2250 - Deprecate
pyo3_build_config::cross_compiling
in favour ofpyo3_build_config::cross_compiling_from_to
. #2253
Fixed
- Fix
abi3-py310
feature: use Python 3.10 ABI when available instead of silently falling back to the 3.9 ABI. #2242 - Use shared linking mode when cross compiling against a Framework bundle for macOS. #2233
- Fix panic during compilation when
PYO3_CROSS_LIB_DIR
is set for some host/target combinations. #2232 - Correct dependency version for
syn
to require minimal patch version 1.0.56. #2240
0.16.2 - 2022-03-15
Packaging
- Warn when modules are imported on PyPy 3.7 versions older than PyPy 7.3.8, as they are known to have binary compatibility issues. #2217
- Ensure build script of
pyo3-ffi
runs before that ofpyo3
to fix cross compilation. #2224
0.16.1 - 2022-03-05
Packaging
- Extend
hashbrown
optional dependency supported versions to include 0.12. #2197
Fixed
- Fix incorrect platform detection for Windows in
pyo3-build-config
. #2198 - Fix regression from 0.16 preventing cross compiling to aarch64 macOS. #2201
0.16.0 - 2022-02-27
Packaging
- Update MSRV to Rust 1.48. #2004
- Update
indoc
optional dependency to 1.0. #2004 - Drop support for Python 3.6, remove
abi3-py36
feature. #2006 pyo3-build-config
no longer enables theresolve-config
feature by default. #2008- Update
inventory
optional dependency to 0.2. #2019 - Drop
paste
dependency. #2081 - The bindings found in
pyo3::ffi
are now a re-export of a separatepyo3-ffi
crate. #2126 - Support PyPy 3.9. #2143
Added
- Add
PyCapsule
type exposing the Capsule API. #1980 - Add
pyo3_build_config::Sysconfigdata
and supporting APIs. #1996 - Add
Py::setattr
method. #2009 - Add
#[pyo3(crate = "some::path")]
option to all attribute macros (except the deprecated#[pyproto]
). #2022 - Enable
create_exception!
macro to take an optional docstring. #2027 - Enable
#[pyclass]
for fieldless (aka C-like) enums. #2034 - Add buffer magic methods
__getbuffer__
and__releasebuffer__
to#[pymethods]
. #2067 - Add support for paths in
wrap_pyfunction
andwrap_pymodule
. #2081 - Enable
wrap_pyfunction!
to wrap a#[pyfunction]
implemented in a different Rust module or crate. #2091 - Add
PyAny::contains
method (in
operator forPyAny
). #2115 - Add
PyMapping::contains
method (in
operator forPyMapping
- Add garbage collection magic methods
__traverse__
and__clear__
to#[pymethods]
. #2159 - Add support for
from_py_with
on struct tuples and enums to override the default from-Python conversion. #2181 - Add
eq
,ne
,lt
,le
,gt
,ge
methods toPyAny
that wraprich_compare
. #2175 - Add
Py::is
andPyAny::is
methods to check for object identity. #2183 - Add support for the
__getattribute__
magic method. #2187
Changed
PyType::is_subclass
,PyErr::is_instance
andPyAny::is_instance
now operate on a run-time type object instead of a type known at compile time. The old behavior is still available asPyType::is_subclass_of
,PyErr::is_instance_of
andPyAny::is_instance_of
. #1985- Rename some methods on
PyErr
(the old names are just marked deprecated for now): #2026pytype
->get_type
pvalue
->value
(and deprecate equivalentinstance
)ptraceback
->traceback
from_instance
->from_value
into_instance
->into_value
PyErr::new_type
now takes an optional docstring and now returnsPyResult<Py<PyType>>
rather than affi::PyTypeObject
pointer. #2027- Deprecate
PyType::is_instance
; it is inconsistent with otheris_instance
methods in PyO3. Instead oftyp.is_instance(obj)
, useobj.is_instance(typ)
. #2031 __getitem__
,__setitem__
and__delitem__
in#[pymethods]
now implement both a Python mapping and sequence by default. #2065- Improve performance and error messages for
#[derive(FromPyObject)]
for enums. #2068 - Reduce generated LLVM code size (to improve compile times) for:
- Respect Rust privacy rules for items wrapped with
wrap_pyfunction
andwrap_pymodule
. #2081 - Add modulo argument to
__ipow__
magic method. #2083 - Fix FFI definition for
_PyCFunctionFast
. #2126 PyDateTimeAPI
andPyDateTime_TimeZone_UTC
are now unsafe functions instead of statics. #2126PyDateTimeAPI
does not implicitly callPyDateTime_IMPORT
anymore to reflect the original Python API more closely. Before the first call toPyDateTime_IMPORT
a null pointer is returned. Therefore before calling any of the following FFI functionsPyDateTime_IMPORT
must be called to avoid undefined behaviour: #2126PyDateTime_TimeZone_UTC
PyDate_Check
PyDate_CheckExact
PyDateTime_Check
PyDateTime_CheckExact
PyTime_Check
PyTime_CheckExact
PyDelta_Check
PyDelta_CheckExact
PyTZInfo_Check
PyTZInfo_CheckExact
PyDateTime_FromTimestamp
PyDate_FromTimestamp
- Deprecate the
gc
option forpyclass
(e.g.#[pyclass(gc)]
). Just implement a__traverse__
#[pymethod]
. #2159 - The
ml_meth
field ofPyMethodDef
is now represented by thePyMethodDefPointer
union. #2166 - Deprecate the
#[pyproto]
traits. #2173
Removed
- Remove all functionality deprecated in PyO3 0.14. #2007
- Remove
Default
impl forPyMethodDef
. #2166 - Remove
PartialEq
impl forPy
andPyAny
(use the newis
instead). #2183
Fixed
- Fix undefined symbol for
PyObject_HasAttr
on PyPy. #2025 - Fix memory leak in
PyErr::into_value
. #2026 - Fix clippy warning
needless-option-as-deref
in code generated by#[pyfunction]
and#[pymethods]
. #2040 - Fix undefined behavior in
PySlice::indices
. #2061 - Fix the
wrap_pymodule!
macro using the wrong name for a#[pymodule]
with a#[pyo3(name = "..")]
attribute. #2081 - Fix magic methods in
#[pymethods]
accepting implementations with the wrong number of arguments. #2083 - Fix panic in
#[pyfunction]
generated code when a required argument following anOption
was not provided. #2093 - Fixed undefined behaviour caused by incorrect
ExactSizeIterator
implementations. #2124 - Fix missing FFI definition
PyCMethod_New
on Python 3.9 and up. #2143 - Add missing FFI definitions
_PyLong_NumBits
and_PyLong_AsByteArray
on PyPy. #2146 - Fix memory leak in implementation of
AsPyPointer
forOption<T>
. #2160 - Fix FFI definition of
_PyLong_NumBits
to returnsize_t
instead ofc_int
. #2161 - Fix
TypeError
thrown when argument parsing failed so that it now includes the originating causes. #2177
0.15.2 - 2022-04-14
Packaging
- Backport of PyPy 3.9 support from PyO3 0.16. #2262
0.15.1 - 2021-11-19
Added
- Add implementations for
Py::as_ref
andPy::into_ref
forPy<PySequence>
,Py<PyIterator>
andPy<PyMapping>
. #1682 - Add
PyTraceback
type to represent and format Python tracebacks. #1977
Changed
#[classattr]
constants with a known magic method name (which is lowercase) no longer trigger lint warnings expecting constants to be uppercase. #1969
Fixed
- Fix creating
#[classattr]
by functions with the name of a known magic method. #1969 - Fix use of
catch_unwind
inallow_threads
which can cause fatal crashes. #1989 - Fix build failure on PyPy when abi3 features are activated. #1991
- Fix mingw platform detection. #1993
- Fix panic in
__get__
implementation when accessing descriptor on type object. #1997
0.15.0 - 2021-11-03
Packaging
pyo3
'sCargo.toml
now advertiseslinks = "python"
to inform Cargo that it links against libpython. #1819- Added optional
anyhow
feature to convertanyhow::Error
intoPyErr
. #1822 - Support Python 3.10. #1889
- Added optional
eyre
feature to converteyre::Report
intoPyErr
. #1893 - Support PyPy 3.8. #1948
Added
- Add
PyList::get_item_unchecked
andPyTuple::get_item_unchecked
to get items without bounds checks. #1733 - Support
#[doc = include_str!(...)]
attributes on Rust 1.54 and up. #1746 - Add
PyAny::py
as a convenience forPyNativeType::py
. #1751 - Add implementation of
std::ops::Index<usize>
forPyList
,PyTuple
andPySequence
. #1825 - Add range indexing implementations of
std::ops::Index
forPyList
,PyTuple
andPySequence
. #1829 - Add
PyMapping
type to represent the Python mapping protocol. #1844 - Add commonly-used sequence methods to
PyList
andPyTuple
. #1849 - Add
as_sequence
methods toPyList
andPyTuple
. #1860 - Add support for magic methods in
#[pymethods]
, intended as a replacement for#[pyproto]
. #1864 - Add
abi3-py310
feature. #1889 - Add
PyCFunction::new_closure
to create a Python function from a Rust closure. #1901 - Add support for positional-only arguments in
#[pyfunction]
. #1925 - Add
PyErr::take
to attempt to fetch a Python exception if present. #1957
Changed
PyList
,PyTuple
andPySequence
's APIs now accept only usize
indices instead ofisize
. #1733, #1802, #1803PyList::get_item
andPyTuple::get_item
now returnPyResult<&PyAny>
instead of panicking. #1733PySequence::in_place_repeat
andPySequence::in_place_concat
now returnPyResult<&PySequence>
instead ofPyResult<()>
, which is needed in case of immutable sequences such as tuples. #1803PySequence::get_slice
now returnsPyResult<&PySequence>
instead ofPyResult<&PyAny>
. #1829- Deprecate
PyTuple::split_from
. #1804 - Deprecate
PyTuple::slice
, new methodPyTuple::get_slice
added withusize
indices. #1828 - Deprecate FFI definitions
PyParser_SimpleParseStringFlags
,PyParser_SimpleParseStringFlagsFilename
,PyParser_SimpleParseFileFlags
when building for Python 3.9. #1830 - Mark FFI definitions removed in Python 3.10
PyParser_ASTFromString
,PyParser_ASTFromStringObject
,PyParser_ASTFromFile
,PyParser_ASTFromFileObject
,PyParser_SimpleParseStringFlags
,PyParser_SimpleParseStringFlagsFilename
,PyParser_SimpleParseFileFlags
,PyParser_SimpleParseString
,PyParser_SimpleParseFile
,Py_SymtableString
, andPy_SymtableStringObject
. #1830 #[pymethods]
now handles magic methods similarly to#[pyproto]
. In the future,#[pyproto]
may be deprecated. #1864- Deprecate FFI definitions
PySys_AddWarnOption
,PySys_AddWarnOptionUnicode
andPySys_HasWarnOptions
. #1887 - Deprecate
#[call]
attribute in favor of usingfn __call__
. #1929 - Fix missing FFI definition
_PyImport_FindExtensionObject
on Python 3.10. #1942 - Change
PyErr::fetch
to panic in debug mode if no exception is present. #1957
Fixed
- Fix building with a conda environment on Windows. #1873
- Fix panic on Python 3.6 when calling
Python::with_gil
with Python initialized but threading not initialized. #1874 - Fix incorrect linking to version-specific DLL instead of
python3.dll
when cross-compiling to Windows withabi3
. #1880 - Fix FFI definition for
PyTuple_ClearFreeList
incorrectly being present for Python 3.9 and up. #1887 - Fix panic in generated
#[derive(FromPyObject)]
for enums. #1888 - Fix cross-compiling to Python 3.7 builds with the "m" abi flag. #1908
- Fix
__mod__
magic method fallback to__rmod__
. #1934. - Fix missing FFI definition
_PyImport_FindExtensionObject
on Python 3.10. #1942
0.14.5 - 2021-09-05
Added
- Make
pyo3_build_config::InterpreterConfig
and subfields public. #1848 - Add
resolve-config
feature to thepyo3-build-config
to control whether its build script does anything. #1856
Fixed
- Fix 0.14.4 compile regression on
s390x-unknown-linux-gnu
target. #1850
0.14.4 - 2021-08-29
Changed
- Mark
PyString::data
asunsafe
and disable it and some supporting PyUnicode FFI APIs (which depend on a C bitfield) on big-endian targets. #1834
0.14.3 - 2021-08-22
Added
- Add
PyString::data
to access the raw bytes stored in a Python string. #1794
Fixed
- Raise
AttributeError
to avoid panic when callingdel
on a#[setter]
defined class property. #1779 - Restrict FFI definitions
PyGILState_Check
andPy_tracefunc
to the unlimited API. #1787 - Add missing
_type
field toPyStatus
struct definition. #1791 - Reduce lower bound
num-complex
optional dependency to support interop withrust-numpy
andndarray
when building with the MSRV of 1.41 #1799 - Fix memory leak in
Python::run_code
. #1806 - Fix memory leak in
PyModule::from_code
. #1810 - Remove use of
pyo3::
inpyo3::types::datetime
which broke builds using-Z avoid-dev-deps
#1811
0.14.2 - 2021-08-09
Added
- Add
indexmap
feature to addToPyObject
,IntoPy
andFromPyObject
implementations forindexmap::IndexMap
. #1728 - Add
pyo3_build_config::add_extension_module_link_args
to use in build scripts to set linker arguments (for macOS). #1755 - Add
Python::with_gil_unchecked
unsafe variation ofPython::with_gil
to allow obtaining aPython
in scenarios wherePython::with_gil
would fail. #1769
Changed
PyErr::new
no longer acquires the Python GIL internally. #1724- Reverted PyO3 0.14.0's use of
cargo:rustc-cdylib-link-arg
in its build script, as Cargo unintentionally allowed crates to pass linker args to downstream crates in this way. Projects supporting macOS may need to restore.cargo/config.toml
files. #1755
Fixed
- Fix regression in 0.14.0 rejecting usage of
#[doc(hidden)]
on structs and functions annotated with PyO3 macros. #1722 - Fix regression in 0.14.0 leading to incorrect code coverage being computed for
#[pyfunction]
s. #1726 - Fix incorrect FFI definition of
Py_Buffer
on PyPy. #1737 - Fix incorrect calculation of
dictoffset
on 32-bit Windows. #1475 - Fix regression in 0.13.2 leading to linking to incorrect Python library on Windows "gnu" targets. #1759
- Fix compiler warning: deny trailing semicolons in expression macro. #1762
- Fix incorrect FFI definition of
Py_DecodeLocale
. The 2nd argument is now*mut Py_ssize_t
instead ofPy_ssize_t
. #1766
0.14.1 - 2021-07-04
Added
- Implement
IntoPy<PyObject>
for&PathBuf
and&OsString
. #1712
Fixed
- Fix crashes on PyPy due to incorrect definitions of
PyList_SET_ITEM
. #1713
0.14.0 - 2021-07-03
Packaging
- Update
num-bigint
optional dependency to 0.4. #1481 - Update
num-complex
optional dependency to 0.4. #1482 - Extend
hashbrown
optional dependency supported versions to include 0.11. #1496 - Support PyPy 3.7. #1538
Added
- Extend conversions for
[T; N]
to allN
using const generics (on Rust 1.51 and up). #1128 - Add conversions between
OsStr
/OsString
and Python strings. #1379 - Add conversions between
Path
/PathBuf
and Python strings (andpathlib.Path
objects). #1379 #1654 - Add a new set of
#[pyo3(...)]
attributes to control various PyO3 macro functionality: - Add FFI definition
PyCFunction_CheckExact
for Python 3.9 and later. #1425 - Add FFI definition
Py_IS_TYPE
. #1429 - Add FFI definition
_Py_InitializeMain
. #1473 - Add FFI definitions from
cpython/import.h
.#1475 - Add tuple and unit struct support for
#[pyclass]
macro. #1504 - Add FFI definition
PyDateTime_TimeZone_UTC
. #1572 - Add support for
#[pyclass(extends=Exception)]
. #1591 - Add
PyErr::cause
andPyErr::set_cause
. #1679 - Add FFI definitions from
cpython/pystate.h
. #1687 - Add
wrap_pyfunction!
macro topyo3::prelude
. #1695
Changed
- Allow only one
#[pymethods]
block per#[pyclass]
by default, to remove the dependency oninventory
. Add amultiple-pymethods
feature to opt-in the original behavior and dependency oninventory
. #1457 - Change
PyTimeAccess::get_fold
to return abool
instead of au8
. #1397 - Deprecate FFI definition
PyCFunction_Call
for Python 3.9 and up. #1425 - Deprecate FFI definition
PyModule_GetFilename
. #1425 - The
auto-initialize
feature is no longer enabled by default. #1443 - Change
PyCFunction::new
andPyCFunction::new_with_keywords
to take&'static str
arguments rather than implicitly copying (and leaking) them. #1450 - Deprecate
PyModule::call
,PyModule::call0
,PyModule::call1
andPyModule::get
. #1492 - Add length information to
PyBufferError
s raised fromPyBuffer::copy_to_slice
andPyBuffer::copy_from_slice
. #1534 - Automatically set
-undefined
anddynamic_lookup
linker arguments on macOS with theextension-module
feature. #1539 - Deprecate
#[pyproto]
methods which are easier to implement as#[pymethods]
: #1560PyBasicProtocol::__bytes__
andPyBasicProtocol::__format__
PyContextProtocol::__enter__
andPyContextProtocol::__exit__
PyDescrProtocol::__delete__
andPyDescrProtocol::__set_name__
PyMappingProtocol::__reversed__
PyNumberProtocol::__complex__
andPyNumberProtocol::__round__
PyAsyncProtocol::__aenter__
andPyAsyncProtocol::__aexit__
- Deprecate several attributes in favor of the new
#[pyo3(...)]
options: - Reduce LLVM line counts to improve compilation times. #1604
- No longer call
PyEval_InitThreads
in#[pymodule]
init code. #1630 - Use
METH_FASTCALL
argument passing convention, when possible, to improve#[pyfunction]
and method performance. #1619, #1660 - Filter sysconfigdata candidates by architecture when cross-compiling. #1626
Removed
- Remove deprecated exception names
BaseException
etc. #1426 - Remove deprecated methods
Python::is_instance
,Python::is_subclass
,Python::release
,Python::xdecref
, andPy::from_owned_ptr_or_panic
. #1426 - Remove many FFI definitions which never existed in the Python C-API:
- Remove pyclass implementation details from
PyTypeInfo
: - Remove
PYO3_CROSS_INCLUDE_DIR
environment variable and the associated C header parsing functionality. #1521 - Remove
raw_pycfunction!
macro. #1619 - Remove
PyClassAlloc
trait. #1657 - Remove
PyList::get_parked_item
. #1664
Fixed
- Remove FFI definition
PyCFunction_ClearFreeList
for Python 3.9 and later. #1425 PYO3_CROSS_LIB_DIR
environment variable no longer required when compiling for x86-64 Python from macOS arm64 and the reverse. #1428- Fix FFI definition
_PyEval_RequestCodeExtraIndex
, which took an argument of the wrong type. #1429 - Fix FFI definition
PyIndex_Check
missing with theabi3
feature. #1436 - Fix incorrect
TypeError
raised when keyword-only argument passed along with a positional argument in*args
. #1440 - Fix inability to use a named lifetime for
&PyTuple
of*args
in#[pyfunction]
. #1440 - Fix use of Python argument for
#[pymethods]
inside macro expansions. #1505 - No longer include
__doc__
in__all__
generated for#[pymodule]
. #1509 - Always use cross-compiling configuration if any of the
PYO3_CROSS
family of environment variables are set. #1514 - Support
EnvironmentError
,IOError
, andWindowsError
on PyPy. #1533 - Fix unnecessary rebuilds when cycling between
cargo check
andcargo clippy
in a Python virtualenv. #1557 - Fix segfault when dereferencing
ffi::PyDateTimeAPI
without the GIL. #1563 - Fix memory leak in
FromPyObject
implementations foru128
andi128
. #1638 - Fix
#[pyclass(extends=PyDict)]
leaking the dict contents on drop. #1657 - Fix segfault when calling
PyList::get_item
with negative indices. #1668 - Fix FFI definitions of
PyEval_SetProfile
/PyEval_SetTrace
to takeOption<Py_tracefunc>
parameters. #1692 - Fix
ToPyObject
impl forHashSet
to accept non-default hashers. #1702
0.13.2 - 2021-02-12
Packaging
- Lower minimum supported Rust version to 1.41. #1421
Added
- Add unsafe API
with_embedded_python_interpreter
to initialize a Python interpreter, execute a closure, and finalize the interpreter. #1355 - Add
serde
feature which provides implementations ofSerialize
andDeserialize
forPy<T>
. #1366 - Add FFI definition
_PyCFunctionFastWithKeywords
on Python 3.7 and up. #1384 - Add
PyDateTime::new_with_fold
method. #1398 - Add
size_hint
impls for{PyDict,PyList,PySet,PyTuple}Iterator
s. #1699
Changed
prepare_freethreaded_python
will no longer register anatexit
handler to callPy_Finalize
. This resolves a number of issues with incompatible C extensions causing crashes at finalization. #1355- Mark
PyLayout::py_init
,PyClassDict::clear_dict
, andopt_to_pyobj
safe, as they do not perform any unsafe operations. #1404
Fixed
- Fix support for using
r#raw_idents
as argument names in pyfunctions. #1383 - Fix typo in FFI definition for
PyFunction_GetCode
(was incorrectlyPyFunction_Code
). #1387 - Fix FFI definitions
PyMarshal_WriteObjectToString
andPyMarshal_ReadObjectFromString
as available in limited API. #1387 - Fix FFI definitions
PyListObject
and those fromfuncobject.h
as requiring non-limited API. #1387 - Fix unqualified
Result
usage inpyobject_native_type_base
. #1402 - Fix build on systems where the default Python encoding is not UTF-8. #1405
- Fix build on mingw / MSYS2. #1423
0.13.1 - 2021-01-10
Added
- Add support for
#[pyclass(dict)]
and#[pyclass(weakref)]
with theabi3
feature on Python 3.9 and up. #1342 - Add FFI definitions
PyOS_BeforeFork
,PyOS_AfterFork_Parent
,PyOS_AfterFork_Child
for Python 3.7 and up. #1348 - Add an
auto-initialize
feature to control whether PyO3 should automatically initialize an embedded Python interpreter. For compatibility this feature is enabled by default in PyO3 0.13.1, but is planned to become opt-in from PyO3 0.14.0. #1347 - Add support for cross-compiling to Windows without needing
PYO3_CROSS_INCLUDE_DIR
. #1350
Deprecated
- Deprecate FFI definitions
PyEval_CallObjectWithKeywords
,PyEval_CallObject
,PyEval_CallFunction
,PyEval_CallMethod
when building for Python 3.9. #1338 - Deprecate FFI definitions
PyGetSetDef_DICT
andPyGetSetDef_INIT
which have never been in the Python API. #1341 - Deprecate FFI definitions
PyGen_NeedsFinalizing
,PyImport_Cleanup
(removed in 3.9), andPyOS_InitInterrupts
(3.10). #1348 - Deprecate FFI definition
PyOS_AfterFork
for Python 3.7 and up. #1348 - Deprecate FFI definitions
PyCoro_Check
,PyAsyncGen_Check
, andPyCoroWrapper_Check
, which have never been in the Python API (for the first two, it is possible to usePyCoro_CheckExact
andPyAsyncGen_CheckExact
instead; these are the actual functions provided by the Python API). #1348 - Deprecate FFI definitions for
PyUnicode_FromUnicode
,PyUnicode_AsUnicode
andPyUnicode_AsUnicodeAndSize
, which will be removed from 3.12 and up due to PEP 623. #1370
Removed
- Remove FFI definition
PyFrame_ClearFreeList
when building for Python 3.9. #1341 - Remove FFI definition
_PyDict_Contains
when building for Python 3.10. #1341 - Remove FFI definitions
PyGen_NeedsFinalizing
andPyImport_Cleanup
(for 3.9 and up), andPyOS_InitInterrupts
(3.10). #1348
Fixed
- Stop including
Py_TRACE_REFS
config setting automatically ifPy_DEBUG
is set on Python 3.8 and up. #1334 - Remove
#[deny(warnings)]
attribute (and instead refuse warnings only in CI). #1340 - Fix deprecation warning for missing
__module__
with#[pyclass]
. #1343 - Correct return type of
PyFrozenSet::empty
to&PyFrozenSet
(was incorrectly&PySet
). #1351 - Fix missing
Py_INCREF
on heap type objects on Python versions before 3.8. #1365
0.13.0 - 2020-12-22
Packaging
- Drop support for Python 3.5 (as it is now end-of-life). #1250
- Bump minimum supported Rust version to 1.45. #1272
- Bump indoc dependency to 1.0. #1272
- Bump paste dependency to 1.0. #1272
- Rename internal crates
pyo3cls
andpyo3-derive-backend
topyo3-macros
andpyo3-macros-backend
respectively. #1317
Added
- Add support for building for CPython limited API. Opting in to the limited API enables a single extension wheel built with PyO3 to be installable on multiple Python versions. This required a few minor changes to the runtime behaviour of PyO3
#[pyclass]
types. See the migration guide for full details. #1152- Add feature flags
abi3-py36
,abi3-py37
,abi3-py38
etc. to set the minimum Python version when using the limited API. #1263
- Add argument names to
TypeError
messages generated by pymethod wrappers. #1212 - Add FFI definitions for PEP 587 "Python Initialization Configuration". #1247
- Add FFI definitions for
PyEval_SetProfile
andPyEval_SetTrace
. #1255 - Add FFI definitions for context.h functions (
PyContext_New
, etc). #1259 - Add
PyAny::is_instance
method. #1276 - Add support for conversion between
char
andPyString
. #1282 - Add FFI definitions for
PyBuffer_SizeFromFormat
,PyObject_LengthHint
,PyObject_CallNoArgs
,PyObject_CallOneArg
,PyObject_CallMethodNoArgs
,PyObject_CallMethodOneArg
,PyObject_VectorcallDict
, andPyObject_VectorcallMethod
. #1287 - Add conversions between
u128
/i128
andPyLong
for PyPy. #1310 - Add
Python::version
andPython::version_info
to get the running interpreter version. #1322 - Add conversions for tuples of length 10, 11, and 12. #1454
Changed
- Change return type of
PyType::name
fromCow<str>
toPyResult<&str>
. #1152 #[pyclass(subclass)]
is now required for subclassing from Rust (was previously just required for subclassing from Python). #1152- Change
PyIterator
to be consistent with other native types: it is now used as&PyIterator
instead ofPyIterator<'a>
. #1176 - Change formatting of
PyDowncastError
messages to be closer to Python's builtin error messages. #1212 - Change
Debug
andDisplay
impls forPyException
to be consistent withPyAny
. #1275 - Change
Debug
impl ofPyErr
to output more helpful information (acquiring the GIL if necessary). #1275 - Rename
PyTypeInfo::is_instance
andPyTypeInfo::is_exact_instance
toPyTypeInfo::is_type_of
andPyTypeInfo::is_exact_type_of
. #1278 - Optimize
PyAny::call0
,Py::call0
andPyAny::call_method0
andPy::call_method0
on Python 3.9 and up. #1287 - Require double-quotes for pyclass name argument e.g
#[pyclass(name = "MyClass")]
. #1303
Deprecated
- Deprecate
Python::is_instance
,Python::is_subclass
,Python::release
, andPython::xdecref
. #1292
Removed
- Remove deprecated ffi definitions
PyUnicode_AsUnicodeCopy
,PyUnicode_GetMax
,_Py_CheckRecursionLimit
,PyObject_AsCharBuffer
,PyObject_AsReadBuffer
,PyObject_CheckReadBuffer
andPyObject_AsWriteBuffer
, which will be removed in Python 3.10. #1217 - Remove unused
python3
feature. #1235
Fixed
- Fix missing field in
PyCodeObject
struct (co_posonlyargcount
) - caused invalid access to other fields in Python >3.7. #1260 - Fix building for
x86_64-unknown-linux-musl
target fromx86_64-unknown-linux-gnu
host. #1267 - Fix
#[text_signature]
interacting badly with rustr#raw_identifiers
. #1286 - Fix FFI definitions for
PyObject_Vectorcall
andPyVectorcall_Call
. #1287 - Fix building with Anaconda python inside a virtualenv. #1290
- Fix definition of opaque FFI types. #1312
- Fix using custom error type in pyclass
#[new]
methods. #1319
0.12.4 - 2020-11-28
Fixed
- Fix reference count bug in implementation of
From<Py<T>>
forPyObject
, a regression introduced in PyO3 0.12. #1297
0.12.3 - 2020-10-12
Fixed
- Fix support for Rust versions 1.39 to 1.44, broken by an incorrect internal update to paste 1.0 which was done in PyO3 0.12.2. #1234
0.12.2 - 2020-10-12
Added
- Add support for keyword-only arguments without default values in
#[pyfunction]
. #1209 - Add
Python::check_signals
as a safe wrapper for PyErr_CheckSignals
. #1214
Fixed
- Fix invalid documentation for protocol methods. #1169
- Hide docs of PyO3 private implementation details in
pyo3::class::methods
. #1169 - Fix unnecessary rebuild on PATH changes when the python interpreter is provided by PYO3_PYTHON. #1231
0.12.1 - 2020-09-16
Fixed
- Fix building for a 32-bit Python on 64-bit Windows with a 64-bit Rust toolchain. #1179
- Fix building on platforms where
c_char
isu8
. #1182
0.12.0 - 2020-09-12
Added
- Add FFI definitions
Py_FinalizeEx
,PyOS_getsig
, andPyOS_setsig
. #1021 - Add
PyString::to_str
for accessingPyString
as&str
. #1023 - Add
Python::with_gil
for executing a closure with the Python GIL. #1037 - Add type information to failures in
PyAny::downcast
. #1050 - Implement
Debug
forPyIterator
. #1051 - Add
PyBytes::new_with
andPyByteArray::new_with
for initialisingbytes
andbytearray
objects using a closure. #1074 - Add
#[derive(FromPyObject)]
macro for enums and structs. #1065 - Add
Py::as_ref
andPy::into_ref
for convertingPy<T>
to&T
. #1098 - Add ability to return
Result
types other thanPyResult
from#[pyfunction]
,#[pymethod]
and#[pyproto]
functions. #1106. - Implement
ToPyObject
,IntoPy
, andFromPyObject
for hashbrown'sHashMap
andHashSet
types (requires thehashbrown
feature). #1114 - Add
#[pyfunction(pass_module)]
and#[pyfn(pass_module)]
to pass the module object as the first function argument. #1143 - Add
PyModule::add_function
andPyModule::add_submodule
as typed alternatives toPyModule::add_wrapped
. #1143 - Add native
PyCFunction
andPyFunction
types. #1163
Changed
- Rework exception types: #1024 #1115
- Rename exception types from e.g.
RuntimeError
toPyRuntimeError
. The old names continue to exist but are deprecated. - Exception objects are now accessible as
&T
orPy<T>
, just like other Python-native types. - Rename
PyException::py_err
toPyException::new_err
. - Rename
PyUnicodeDecodeErr::new_err
toPyUnicodeDecodeErr::new
. - Remove
PyStopIteration::stop_iteration
.
- Require
T: Send
for the return valueT
ofPython::allow_threads
. #1036 - Rename
PYTHON_SYS_EXECUTABLE
toPYO3_PYTHON
. The old name will continue to work (undocumented) but will be removed in a future release. #1039 - Remove
unsafe
from signature ofPyType::as_type_ptr
. #1047 - Change return type of
PyIterator::from_object
toPyResult<PyIterator>
(wasResult<PyIterator, PyDowncastError>
). #1051 IntoPy
is no longer implied byFromPy
. #1063- Change
PyObject
to be a type alias forPy<PyAny>
. #1063 - Rework
PyErr
to be compatible with thestd::error::Error
trait: #1067 #1115- Implement
Display
,Error
,Send
andSync
forPyErr
andPyErrArguments
. - Add
PyErr::instance
for accessingPyErr
as&PyBaseException
. PyErr
's fields are now an implementation detail. The equivalent values can be accessed withPyErr::ptype
,PyErr::pvalue
andPyErr::ptraceback
.- Change receiver of
PyErr::print
andPyErr::print_and_set_sys_last_vars
to&self
(wasself
). - Remove
PyErrValue
,PyErr::from_value
,PyErr::into_normalized
, andPyErr::normalize
. - Remove
PyException::into
. - Remove
Into<PyResult<T>>
forPyErr
andPyException
.
- Change methods generated by
#[pyproto]
to returnNotImplemented
if Python should try a reversed operation. #1072 - Change argument to
PyModule::add
toimpl IntoPy<PyObject>
(wasimpl ToPyObject
). #1124
Removed
- Remove many exception and
PyErr
APIs; see the "changed" section above. #1024 #1067 #1115 - Remove
PyString::to_string
(use newPyString::to_str
). #1023 - Remove
PyString::as_bytes
. #1023 - Remove
Python::register_any
. #1023 - Remove
GILGuard::acquire
from the public API. UsePython::acquire_gil
orPython::with_gil
. #1036 - Remove the
FromPy
trait. #1063 - Remove the
AsPyRef
trait. #1098
Fixed
- Correct FFI definitions
Py_SetProgramName
andPy_SetPythonHome
to take*const
arguments (was*mut
). #1021 - Fix
FromPyObject
fornum_bigint::BigInt
for Python objects with an__index__
method. #1027 - Correct FFI definition
_PyLong_AsByteArray
to take*mut c_uchar
argument (was*const c_uchar
). #1029 - Fix segfault with
#[pyclass(dict, unsendable)]
. #1058 #1059 - Fix using
&Self
as an argument type for functions in a#[pymethods]
block. #1071 - Fix best-effort build against PyPy 3.6. #1092
- Fix many cases of lifetime elision in
#[pyproto]
implementations. #1093 - Fix detection of Python build configuration when cross-compiling. #1095
- Always link against libpython on android with the
extension-module
feature. #1095 - Fix the
+
operator not trying__radd__
when both__add__
and__radd__
are defined inPyNumberProtocol
(and similar for all other reversible operators). #1107 - Fix building with Anaconda python. #1175
0.11.1 - 2020-06-30
Added
#[pyclass(unsendable)]
. #1009
Changed
- Update
parking_lot
dependency to0.11
. #1010
0.11.0 - 2020-06-28
Added
- Support stable versions of Rust (>=1.39). #969
- Add FFI definition
PyObject_AsFileDescriptor
. #938 - Add
PyByteArray::data
,PyByteArray::as_bytes
, andPyByteArray::as_bytes_mut
. #967 - Add
GILOnceCell
to use in situations wherelazy_static
oronce_cell
can deadlock. #975 - Add
Py::borrow
,Py::borrow_mut
,Py::try_borrow
, andPy::try_borrow_mut
for accessing#[pyclass]
values. #976 - Add
IterNextOutput
andIterANextOutput
for returning from__next__
/__anext__
. #997
Changed
- Simplify internals of
#[pyo3(get)]
attribute. (Remove the hidden APIGetPropertyValue
.) #934 - Call
Py_Finalize
at exit to flush buffers, etc. #943 - Add type parameter to PyBuffer. #951
- Require
Send
bound for#[pyclass]
. #966 - Add
Python
argument to most methods onPyObject
andPy<T>
to ensure GIL safety. #970 - Change signature of
PyTypeObject::type_object
- now takesPython
argument and returns&PyType
. #970 - Change return type of
PyTuple::slice
andPyTuple::split_from
fromPy<PyTuple>
to&PyTuple
. #970 - Change return type of
PyTuple::as_slice
to&[&PyAny]
. #971 - Rename
PyTypeInfo::type_object
totype_object_raw
, and addPython
argument. #975 - Update
num-complex
optional dependency from 0.2
to0.3
. #977 - Update
num-bigint
optional dependency from 0.2
to0.3
. #978 #[pyproto]
is re-implemented without specialization. #961PyClassAlloc::alloc
is renamed toPyClassAlloc::new
. #990#[pyproto]
methods can now have return valueT
orPyResult<T>
(previously onlyPyResult<T>
was supported). #996#[pyproto]
methods can now skip annotating the return type if it is()
. #998
Removed
- Remove
ManagedPyRef
(unused, and needs specialization) #930
Fixed
- Fix passing explicit
None
toOption<T>
argument#[pyfunction]
with a default value. #936 - Fix
PyClass.__new__
's not respecting subclasses when inherited by a Python class. #990 - Fix returning
Option<T>
from#[pyproto]
methods. #996 - Fix accepting
PyRef<Self>
andPyRefMut<Self>
to#[getter]
and#[setter]
methods. #999
0.10.1 - 2020-05-14
Fixed
- Fix deadlock in
Python::acquire_gil
after dropping aPyObject
orPy<T>
. #924
0.10.0 - 2020-05-13
Added
- Add FFI definition
_PyDict_NewPresized
. #849 - Implement
IntoPy<PyObject>
forHashSet
andBTreeSet
. #864 - Add
PyAny::dir
method. #886 - Gate macros behind a
macros
feature (enabled by default). #897 - Add ability to define class attributes using
#[classattr]
on functions in#[pymethods]
. #905 - Implement
Clone
forPyObject
andPy<T>
. #908 - Implement
Deref<Target = PyAny>
for all builtin types. (PyList
,PyTuple
,PyDict
etc.) #911 - Implement
Deref<Target = PyAny>
forPyCell<T>
. #911 - Add
#[classattr]
support for associated constants in#[pymethods]
. #914
Changed
- Panics will now be raised as a Python
PanicException
. #797 - Change
PyObject
andPy<T>
reference counts to decrement immediately upon drop when the GIL is held. #851 - Allow
PyIterProtocol
methods to use eitherPyRef
orPyRefMut
as the receiver type. #856 - Change the implementation of
FromPyObject
forPy<T>
to apply to a wider range ofT
, including allT: PyClass
. #880 - Move all methods from the
ObjectProtocol
trait to thePyAny
struct. #911 - Remove need for
#![feature(specialization)]
in crates depending on PyO3. #917
Removed
- Remove
PyMethodsProtocol
trait. #889 - Remove
num-traits
dependency. #895 - Remove
ObjectProtocol
trait. #911 - Remove
PyAny::None
. Users should usePython::None
instead. #911 - Remove all
*ProtocolImpl
traits. #917
Fixed
- Fix support for
__radd__
and other__r*__
methods as implementations for Python mathematical operators. #839 - Fix panics during garbage collection when traversing objects that were already mutably borrowed. #855
- Prevent
&'static
references to Python objects as arguments to#[pyfunction]
and#[pymethods]
. #869 - Fix lifetime safety bug with
AsPyRef::as_ref
. #876 - Fix
#[pyo3(get)]
attribute onPy<T>
fields. #880 - Fix segmentation faults caused by functions such as
PyList::get_item
returning borrowed objects when it was not safe to do so. #890 - Fix segmentation faults caused by nested
Python::acquire_gil
calls creating dangling references. #893 - Fix segmentation faults when a panic occurs during a call to
Python::allow_threads
. #912
0.9.2 - 2020-04-09
Added
FromPyObject
implementations forHashSet
andBTreeSet
. #842
Fixed
- Correctly detect 32bit architecture. #830
0.9.1 - 2020-03-23
Fixed
0.9.0 - 2020-03-19
Added
PyCell
, which has RefCell-like features. #770PyClass
,PyLayout
,PyClassInitializer
. #683- Implemented
IntoIterator
forPySet
andPyFrozenSet
. #716 FromPyObject
is now automatically implemented forT: Clone
pyclasses. #730#[pyo3(get)]
and#[pyo3(set)]
will now use the Rust doc-comment from the field for the Python property. #755#[setter]
functions may now take an argument ofPyo3::Python
. #760PyTypeInfo::BaseLayout
andPyClass::BaseNativeType
. #770PyDowncastImpl
. #770- Implement
FromPyObject
andIntoPy<PyObject>
traits for arrays (up to 32). #778 migration.md
andtypes.md
in the guide. #795, #802ffi::{_PyBytes_Resize, _PyDict_Next, _PyDict_Contains, _PyDict_GetDictPtr}
. #820
Changed
#[new]
does not takePyRawObject
and can returnSelf
. #683- The blanket implementations for
FromPyObject
for&T
and&mut T
are no longer specializable. ImplementPyTryFrom
for your type to control the behavior ofFromPyObject::extract
for your types. #713 - The implementation for
IntoPy<U> for T
whereU: FromPy<T>
is no longer specializable. Control the behavior of this via the implementation ofFromPy
. #713 - Use
parking_lot::Mutex
instead ofspin::Mutex
. #734 - Bumped minimum Rust version to
1.42.0-nightly 2020-01-21
. #761 PyRef
andPyRefMut
are renewed forPyCell
. #770- Some new FFI functions for Python 3.8. #784
PyAny
is now on the top level module and prelude. #816
Removed
PyRawObject
. #683PyNoArgsFunction
. #741initialize_type
. To set the module name for a#[pyclass]
, use themodule
argument to the macro. #751AsPyRef::as_mut/with/with_mut/into_py/into_mut_py
. #770PyTryFrom::try_from_mut/try_from_mut_exact/try_from_mut_unchecked
. #770Python::mut_from_owned_ptr/mut_from_borrowed_ptr
. #770ObjectProtocol::get_base/get_mut_base
. #770
Fixed
- Fixed unsoundness of subclassing. #683.
- Clear error indicator when the exception is handled on the Rust side. #719
- Usage of raw identifiers with
#[pyo3(set)]
. #745 - Usage of
PyObject
with#[pyo3(get)]
. #760 #[pymethods]
used in conjunction with#[cfg]
. #769"*"
in a#[pyfunction()]
argument list incorrectly accepting any number of positional arguments (useargs = "*"
when this behaviour is desired). #792PyModule::dict
. #809- Fix the case where
DESCRIPTION
is not null-terminated. #822
0.8.5 - 2020-01-05
Added
- Implemented
FromPyObject
forHashMap
andBTreeMap
- Support for
#[name = "foo"]
attribute for#[pyfunction]
and in#[pymethods]
. #692
0.8.4 - 2019-12-14
Added
- Support for
#[text_signature]
attribute. #675
0.8.3 - 2019-11-23
Removed
#[init]
is removed. #658
Fixed
- Now all
&Py~
types have!Send
bound. #655 - Fix a compile error raised by the stabilization of
!
type. #672.
0.8.2 - 2019-10-27
Added
- FFI compatibility for PEP 590 Vectorcall. #641
Fixed
- Fix PySequenceProtocol::set_item. #624
- Fix a corner case of BigInt::FromPyObject. #630
- Fix index errors in parameter conversion. #631
- Fix handling of invalid utf-8 sequences in
PyString::as_bytes
. #639 andPyString::to_string_lossy
#642. - Remove
__contains__
and__iter__
from PyMappingProtocol. #644 - Fix proc-macro definition of PySetAttrProtocol. #645
0.8.1 - 2019-10-08
Added
- Conversion between num-bigint and Python int. #608
Fixed
- Make sure the right Python interpreter is used in OSX builds. #604
- Patch specialization being broken by Rust 1.40. #614
- Fix a segfault around PyErr. #597
0.8.0 - 2019-09-16
Added
module
argument topyclass
macro. #499py_run!
macro #512- Use existing fields and methods before calling custom getattr. #505
PyBytes
can now be indexed just likeVec<u8>
- Implement
IntoPy<PyObject>
forPyRef
andPyRefMut
.
Changed
- Using the
gc
parameter forpyclass
(e.g.#[pyclass(gc)]
) without implementing theclass::PyGCProtocol
trait is now a compile-time error. Failing to implement this trait could lead to segfaults. #532 PyByteArray::data
has been replaced withPyDataArray::to_vec
because returning a&[u8]
is unsound. (See this comment for a great write-up for why that was unsound)- Replace
mashup
withpaste
. GILPool
gained aPython
marker to prevent it from being misused to release Python objects without the GIL held.
Removed
IntoPyObject
was replaced withIntoPy<PyObject>
#[pyclass(subclass)]
is hidden aunsound-subclass
feature because it's causing segmentation faults.
Fixed
- More readable error message for generics in pyclass #503
0.7.0 - 2019-05-26
Added
- PyPy support by omerbenamram in #393
- Have
PyModule
generate an index of its members (__all__
list). - Allow
slf: PyRef<T>
for pyclass (#419) - Allow using lifetime specifiers in
pymethods
- Add
marshal
module. #460
Changed
Python::run
returnsPyResult<()>
instead ofPyResult<&PyAny>
.- Methods decorated with
#[getter]
and#[setter]
can now omit wrapping the result type inPyResult
if they don't raise exceptions.
Fixed
type_object::PyTypeObject
has been marked unsafe because breaking the contracttype_object::PyTypeObject::init_type
can lead to UB.- Fixed automatic derive of
PySequenceProtocol
implementation in #423. - Capitalization & better wording to README.md.
- Docstrings of properties are now properly set using the doc of the
#[getter]
method. - Fixed issues with
pymethods
crashing on doc comments containing double quotes. PySet::new
andPyFrozenSet::new
now returnPyResult<&Py[Frozen]Set>
; exceptions are raised if the items are not hashable.- Fixed building using
venv
on Windows. PyTuple::new
now returns&PyTuple
instead ofPy<PyTuple>
- Fixed several issues with argument parsing; notably, the
*args
and**kwargs
tuple/dict now doesn't contain arguments that are otherwise assigned to parameters.
0.6.0 - 2019-03-28
Regressions
- Currently, #341 causes
cargo test
to fail with weird linking errors when theextension-module
feature is activated. For now you can work around this by making theextension-module
feature optional and running the tests withcargo test --no-default-features
:
[dependencies.pyo3]
version = "0.6.0"
[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]
Added
- Added a
wrap_pymodule!
macro similar to the existingwrap_pyfunction!
macro. Only available on python 3 - Added support for cross compiling (e.g. to arm v7) by mtp401 in #327. See the "Cross Compiling" section in the "Building and Distribution" chapter of the guide for more details.
- The
PyRef
andPyRefMut
types, which allow differentiating between an instance of a Rust struct on the Rust heap and an instance that is embedded inside a Python object. By kngwyu in #335
FromPy<T>
andIntoPy<T>
which are equivalent toFrom<T>
andInto<T>
except that they require a GIL token. - Added
ManagedPyRef
, which should eventually replaceToBorrowedObject
.
Changed
- Renamed
PyObjectRef
toPyAny
in #388 - Renamed
add_function
toadd_wrapped
as it now also supports modules. - Renamed
#[pymodinit]
to#[pymodule]
py.init(|| value)
becomesPy::new(value)
py.init_ref(|| value)
becomesPyRef::new(value)
py.init_mut(|| value)
becomesPyRefMut::new(value)
.PyRawObject::init
is now infallible, i.e. it returns()
instead ofPyResult<()>
.- Renamed
py_exception!
tocreate_exception!
and refactored the error macros. - Renamed
wrap_function!
towrap_pyfunction!
- Renamed
#[prop(get, set)]
to#[pyo3(get, set)]
#[pyfunction]
now supports the same arguments as#[pyfn()]
- Some macros now emit proper spanned errors instead of panics.
- Migrated to the 2018 edition
crate::types::exceptions
moved tocrate::exceptions
- Replace
IntoPyTuple
withIntoPy<Py<PyTuple>>
. IntoPyPointer
andToPyPointer
moved into the crate root.class::CompareOp
moved intoclass::basic::CompareOp
- PyTypeObject is now a direct subtrait of PyTypeCreate, removing the old cyclical implementation in #350
- Add
PyList::{sort, reverse}
by chr1sj0nes in #357 and #358 - Renamed the
typeob
module totype_object
Removed
PyToken
was removed due to unsoundness (See #94).- Removed the unnecessary type parameter from
PyObjectAlloc
NoArgs
. Just use an empty tuplePyObjectWithGIL
.PyNativeType
is sufficient now that PyToken is removed.
Fixed
- A soundness hole where every instance of a
#[pyclass]
struct was considered to be part of a python object, even though you can create instances that are not part of the python heap. This was fixed throughPyRef
andPyRefMut
. - Fix kwargs support in #328.
- Add full support for
__dict__
in #403.
0.5.3 - 2019-01-04
Fixed
- Fix memory leak in ArrayList by kngwyu #316
0.5.2 - 2018-11-25
Fixed
- Fix nondeterministic segfaults when creating many objects by kngwyu in #281
0.5.1 - 2018-11-24
Yanked
0.5.0 - 2018-11-11
Added
#[pyclass]
objects can now be returned from rust functionsPyComplex
by kngwyu in #226PyDict::from_sequence
, equivalent todict([(key, val), ...])
- Bindings for the
datetime
standard library types:PyDate
,PyTime
,PyDateTime
,PyTzInfo
,PyDelta
with associatedffi
types, by pganssle #200. PyString
,PyUnicode
, andPyBytes
now have anas_bytes
method that returns&[u8]
.PyObjectProtocol::get_type_ptr
by ijl in #242
Changed
- Removes the types from the root module and the prelude. They now live in
pyo3::types
instead. - All exceptions are constructed with
py_err
instead ofnew
, as they returnPyErr
and notSelf
. as_mut
and friends take &mut self
instead of&self
ObjectProtocol::call
now takes anOption<&PyDict>
for the kwargs instead of anIntoPyDictPointer
.IntoPyDictPointer
was replaced by IntoPyDict
which doesn't convertPyDict
itself anymore and returns aPyDict
instead of*mut PyObject
.PyTuple::new
now takes anIntoIterator
instead of a slice- Updated to syn 0.15
- Split
PyTypeObject
intoPyTypeObject
without the create method andPyTypeCreate
which requires PyObjectAlloc<Self> + PyTypeInfo + Sized
. - Ran
cargo edition --fix
which prefixed path withcrate::
for rust 2018 - Renamed
async
topyasync
as async will be a keyword in the 2018 edition. - Starting to use
NonNull<*mut PyObject>
for Py and PyObject by ijl #260
Removed
- Removed most entries from the prelude. The new prelude is small and clear.
- Slowly removing specialization uses
PyString
,PyUnicode
, andPyBytes
no longer have adata
method (replaced byas_bytes
) andPyStringData
has been removed.- The pyobject_extract macro
Fixed
- Added an explanation that the GIL can temporarily be released even while holding a GILGuard.
- Lots of clippy errors
- Fix segfault on calling an unknown method on a PyObject
- Work around a bug in the rust compiler by kngwyu #252
- Fixed a segfault when subclassing pyo3-created classes and using
__class__
by kngwyu #263
0.4.1 - 2018-08-20
Changed
- PyTryFrom's error is always
PyDowncastError
Fixed
- Fixed compilation on nightly since
use_extern_macros
was stabilized
Removed
- The pyobject_downcast macro
0.4.0 - 2018-07-30
Changed
- Merged both examples into one
- Rustfmt all the things :heavy_check_mark:
- Switched to Keep a Changelog
Removed
- Conversions from tuples to PyDict due to rust-lang/rust#52050
0.3.2 - 2018-07-22
Changed
- Replaced
concat_idents
with mashup
0.3.1 - 2018-07-18
Fixed
- Fixed scoping bug in pyobject_native_type that would break rust-numpy
0.3.0 - 2018-07-18
Added
- A few internal macros became part of the public api (#155, #186)
- Always clone in getters. This allows using the get-annotation on all Clone-Types
Changed
- Upgraded to syn 0.14 which means much better error messages :tada:
- 128 bit integer support by kngwyu (#137)
proc_macro
has been stabilized on nightly (rust-lang/rust#52081). This means that we can remove theproc_macro
feature, but now we need theuse_extern_macros
from the 2018 edition instead.- All proc macros are now prefixed with
py
and live in the prelude. This means you can use#[pyclass]
,#[pymethods]
,#[pyproto]
,#[pyfunction]
and#[pymodinit]
directly, at least after ause pyo3::prelude::*
. They were also moved into a module calledproc_macro
. You shouldn't use#[pyo3::proc_macro::pyclass]
or other longer paths in attributes becauseproc_macro_path_invoc
isn't going to be stabilized soon. - Renamed the
base
option in thepyclass
macro toextends
. #[pymodinit]
uses the function name as the module name, unless the name is overridden with#[pymodinit(name)]
- The guide is now properly versioned.
0.2.7 - 2018-05-18
Fixed
- Fix nightly breakage with proc_macro_path
0.2.6 - 2018-04-03
Fixed
- Fix compatibility with TryFrom trait #137
0.2.5 - 2018-02-21
Added
- CPython 3.7 support
Fixed
- Embedded CPython 3.7b1 crashes on initialization #110
- Generated extension functions are weakly typed #108
- call_method* crashes when the method does not exist #113
- Allow importing exceptions from nested modules #116
0.2.4 - 2018-01-19
Added
- Allow getting a mutable ref from PyObject #106
- Drop
RefFromPyObject
trait - Add Python::register_any method
Fixed
- Fix impl
FromPyObject
forPy<T>
- Mark methods that work with raw pointers as unsafe #95
0.2.3 - 11-27-2017
Changed
- Rustup to 1.23.0-nightly 2017-11-07
Fixed
- Proper
c_char
usage #93
Removed
- Remove use of now unneeded 'AsciiExt' trait
0.2.2 - 09-26-2017
Changed
- Rustup to 1.22.0-nightly 2017-09-30
0.2.1 - 09-26-2017
Fixed
- Fix rustc const_fn nightly breakage
0.2.0 - 08-12-2017
Added
- Added inheritance support #15
- Added weakref support #56
- Added subclass support #64
- Added
self.__dict__
support #68 - Added
pyo3::prelude
module #70 - Better
Iterator
support for PyTuple, PyList, PyDict #75 - Introduce IntoPyDictPointer similar to IntoPyTuple #69
Changed
- Allow adding gc support without implementing PyGCProtocol #57
- Refactor
PyErr
implementation. Droppy
parameter from constructor.
0.1.0 - 07-23-2017
Added
- Initial release
Contributing
Thank you for your interest in contributing to PyO3! All are welcome - please consider reading our Code of Conduct to keep our community positive and inclusive.
If you are searching for ideas on how to contribute, proceed to the "Getting started contributing" section. If you have found a specific issue to contribute to and need information about the development process, you may find the section "Writing pull requests" helpful.
If you want to become familiar with the codebase, see Architecture.md.
Getting started contributing
Please join in with any part of PyO3 which interests you. We use GitHub issues to record all bugs and ideas. Feel free to request an issue to be assigned to you if you want to work on it.
You can browse the API of the non-public parts of PyO3 here.
The following sections also contain specific ideas on where to start contributing to PyO3.
Setting up a development environment
To work on and develop PyO3, you need Python & Rust installed on your system.
- We encourage the use of rustup to be able to select specific toolchains based on the project.
- Pyenv is also highly recommended for being able to choose a specific Python version.
- virtualenv can also be used with or without Pyenv to use specific installed Python versions.
nox
is used to automate many of our CI tasks.
Caveats
- When using pyenv on macOS, installing a Python version using
--enable-shared
is required to make it work, e.g. env PYTHON_CONFIGURE_OPTS="--enable-shared" pyenv install 3.7.12
Help users identify bugs
The PyO3 Gitter channel is very active with users who are new to PyO3, and often completely new to Rust. Helping them debug is a great way to get experience with the PyO3 codebase.
Helping others often reveals bugs, documentation weaknesses, and missing APIs. It's a good idea to open GitHub issues for these immediately so the resolution can be designed and implemented!
Implement issues ready for development
Issues where the solution is clear and work is not in progress use the needs-implementer label.
Don't be afraid if the solution is not clear to you! The core PyO3 contributors will be happy to mentor you through any questions you have to help you write the solution.
Help write great docs
PyO3 has a user guide (using mdbook) as well as the usual Rust API docs. The aim is for both of these to be detailed, easy to understand, and up-to-date. Pull requests are always welcome to fix typos, change wording, add examples, etc.
There are some specific areas of focus where help is currently needed for the documentation:
- Issues requesting documentation improvements are tracked with the documentation label.
- Not all APIs had docs or examples when they were made. The goal is to have documentation on all PyO3 APIs (#306). If you see an API lacking a doc, please write one and open a PR!
You can build the docs (including all features) with
cargo xtask doc --open
Doctests
We use lots of code blocks in our docs. Run cargo test --doc
when making changes to check that
the doctests still work, or cargo test
to run all the tests including doctests. See
https://doc.rust-lang.org/rustdoc/documentation-tests.html for a guide on doctests.
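For orientation, a doctest is just a Rust code block inside a /// doc comment, and cargo test --doc compiles and runs it. Below is a minimal, hedged sketch: the documented function and its name are hypothetical and not part of PyO3's API; only the Python::with_gil, eval, and extract calls are real PyO3 usage.

/// Evaluates a small Python expression and checks the result.
///
/// ```
/// use pyo3::prelude::*;
///
/// Python::with_gil(|py| {
///     // `py.eval` runs a Python expression and returns a `&PyAny`.
///     let sum: i64 = py.eval("2 + 3", None, None)
///         .unwrap()
///         .extract()
///         .unwrap();
///     assert_eq!(sum, 5);
/// });
/// ```
pub fn doctest_example() {} // hypothetical item carrying the doctest

If a block like this stops compiling, or its assertion fails, cargo test --doc reports it as a test failure.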
Building the guide
You can preview the user guide by building it locally with mdbook
.
First, install mdbook
and nox
. Then, run
nox -s build-guide -- --open
Help design the next PyO3
Issues which don't yet have a clear solution use the needs-design label.
If any of these issues interest you, please join in with the conversation on the issue! All opinions are valued, and if you're interested in going further with e.g. draft PRs to experiment with API designs, even better!
Review pull requests
Everybody is welcome to submit comments on open PRs. Please help ensure new PyO3 APIs are safe, performant, tidy, and easy to use!
Writing pull requests
Here are a few things to note when you are writing PRs.
Continuous Integration
The PyO3 repo uses GitHub Actions. PRs are blocked from merging if CI is not successful.
Formatting, linting and tests are checked for all Rust and Python code. In addition, all warnings in Rust code are disallowed (using RUSTFLAGS="-D warnings"
).
Tests run with all supported Python versions with the latest stable Rust compiler, as well as for Python 3.9 with the minimum supported Rust version.
If you are adding a new feature, you should add it to the full
feature in our Cargo.toml so that it is tested in CI.
You can run these tests yourself with
cargo xtask ci
See its documentation for more commands you can run.
Documenting changes
We use towncrier to generate a CHANGELOG for each release.
To include your changes in the release notes, you should create one (or more) news items in the newsfragments
directory. Valid news items should be saved as <PR>.<CATEGORY>.md
where <PR>
is the pull request number and <CATEGORY>
is one of the following:
packaging
- for dependency changes and Python / Rust version compatibility changesadded
- for new featureschanged
- for features which already existed but have been altered or deprecatedremoved
- for features which have been removedfixed
- for "changed" features which were classed as a bugfix
Python and Rust version support policy
PyO3 aims to keep sufficient compatibility to make packaging Python extensions built with PyO3 feasible on most common package managers.
To keep package maintainers' lives simpler, PyO3 will commit, wherever possible, to only adjust minimum supported Rust and Python versions at the same time. This bump will only come in an 0.x
release, roughly once per year, after the oldest supported Python version reaches its end-of-life. (Check https://endoflife.date/python for a clear timetable on these.)
Below are guidelines on what compatibility all PRs are expected to deliver for each language.
Python
PyO3 supports all officially supported Python versions, as well as the latest PyPy3 release. All of these versions are tested in CI.
Rust
PyO3 aims to make use of up-to-date Rust language features to keep the implementation as efficient as possible.
The minimum supported Rust version will be decided when the release which bumps the Python and Rust versions is made. At that time, the minimum Rust version will be set no higher than the lowest Rust version shipped in the current Debian, RHEL, and Alpine Linux distributions.
CI tests both the most recent stable Rust version and the minimum supported Rust version. Because of Rust's stability guarantees this is sufficient to confirm support for all Rust versions in between.
Benchmarking
PyO3 has two sets of benchmarks for evaluating some aspects of its performance. The benchmark suite is currently very small - please open PRs with new benchmarks if you're interested in helping to expand it!
First, there are Rust-based benchmarks located in the benches
subdirectory. As long as you have a nightly rust compiler available on your system, you can run these benchmarks with:
cargo +nightly bench
Second, there is a Python-based benchmark contained in the pytests
subdirectory. You can read more about it here.
Code coverage
You can view what code is and isn't covered by PyO3's tests. We aim to have 100% coverage - please check coverage and add tests if you notice a lack of coverage!
- First, generate a
lcov.info
file with
cargo xtask coverage
You can install an IDE plugin to view the coverage. For example, if you use VSCode:
- Add the coverage-gutters plugin.
- Add these settings to VSCode's
settings.json
:
{
"coverage-gutters.coverageFileNames": [
"lcov.info",
"cov.xml",
"coverage.xml",
],
"coverage-gutters.showLineCoverage": true
}
- You should now be able to see green highlights for code that is tested, and red highlights for code that is not tested.
Sponsor this project
At the moment there is no official organisation that accepts sponsorship on PyO3's behalf. If you're seeking to provide significant funding to the PyO3 ecosystem, please reach out to us on GitHub or Gitter and we can discuss.
In the meantime, some of our maintainers have personal GitHub sponsorship pages and would be grateful for your support: