The PyO3 user guide

Welcome to the PyO3 user guide! This book is a companion to PyO3's API docs. It contains examples and documentation to explain all of PyO3's use cases in detail.

Please choose from the chapters on the left to jump to individual topics, or continue below to start with PyO3's README.

PyO3

Rust bindings for Python, including tools for creating native Python extension modules. Running and interacting with Python code from a Rust binary is also supported.

Usage

PyO3 supports the following software versions:

  • Python 3.6 and up (CPython and PyPy)
  • Rust 1.41 and up

You can use PyO3 to write a native Python module in Rust, or to embed Python in a Rust binary. The following sections explain each of these in turn.

Using Rust from Python

PyO3 can be used to generate a native Python module. The easiest way to try this out for the first time is to use maturin. maturin is a tool for building and publishing Rust-based Python packages with minimal configuration. The following steps set up some files for an example Python module, install maturin, and then show how to build and import the Python module.

First, create a new folder (let's call it string_sum) containing the following two files:

Cargo.toml

[package]
name = "string-sum"
version = "0.1.0"
edition = "2018"

[lib]
name = "string_sum"
# "cdylib" is necessary to produce a shared library for Python to import from.
#
# Downstream Rust code (including code in `bin/`, `examples/`, and `tests/`) will not be able
# to `use string_sum;` unless the "rlib" or "lib" crate type is also included, e.g.:
# crate-type = ["cdylib", "rlib"]
crate-type = ["cdylib"]

[dependencies.pyo3]
version = "0.14.3"
features = ["extension-module"]

src/lib.rs


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

/// Formats the sum of two numbers as string.
#[pyfunction]
fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
    Ok((a + b).to_string())
}

/// A Python module implemented in Rust. The name of this function must match
/// the `lib.name` setting in the `Cargo.toml`, else Python will not be able to
/// import the module.
#[pymodule]
fn string_sum(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;

    Ok(())
}
}

With those two files in place, now maturin needs to be installed. This can be done using Python's package manager pip. First, load up a new Python virtualenv, and install maturin into it:

$ cd string_sum
$ python -m venv .env
$ source .env/bin/activate
$ pip install maturin

Now build and execute the module:

$ maturin develop
# lots of progress output as maturin runs the compilation...
$ python
>>> import string_sum
>>> string_sum.sum_as_string(5, 20)
'25'

As well as with maturin, it is possible to build using setuptools-rust or manually. Both offer more flexibility than maturin but require further configuration.

Using Python from Rust

To embed Python into a Rust binary, you need to ensure that your Python installation contains a shared library. The following steps demonstrate how to ensure this (for Ubuntu), and then give some example code which runs an embedded Python interpreter.

To install the Python shared library on Ubuntu:

sudo apt install python3-dev

Start a new project with cargo new and add pyo3 to the Cargo.toml like this:

[dependencies.pyo3]
version = "0.14.3"
features = ["auto-initialize"]

Example program displaying the value of sys.version and the current user name:

use pyo3::prelude::*;
use pyo3::types::IntoPyDict;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        let sys = py.import("sys")?;
        let version: String = sys.get("version")?.extract()?;

        let locals = [("os", py.import("os")?)].into_py_dict(py);
        let code = "os.getenv('USER') or os.getenv('USERNAME') or 'Unknown'";
        let user: String = py.eval(code, None, Some(&locals))?.extract()?;

        println!("Hello {}, I'm Python {}", user, version);
        Ok(())
    })
}

The guide has a section with lots of examples about this topic.

Tools and libraries

  • maturin Zero configuration build tool for Rust-made Python extensions.
  • setuptools-rust Setuptools plugin for Rust support.
  • pyo3-built Simple macro to expose metadata obtained with the built crate as a PyDict
  • rust-numpy Rust binding of NumPy C-API
  • dict-derive Derive FromPyObject to automatically transform Python dicts into Rust structs
  • pyo3-log Bridge from Rust to Python logging
  • pythonize Serde serializer for converting Rust objects to JSON-compatible Python objects
  • pyo3-asyncio Utilities for working with Python's Asyncio library and async functions

Examples

  • hyperjson A hyper-fast Python module for reading/writing JSON data using Rust's serde-json
  • html-py-ever Using html5ever through kuchiki to speed up html parsing and css-selecting.
  • point-process High level API for pointprocesses as a Python library
  • autopy A simple, cross-platform GUI automation library for Python and Rust.
    • Contains an example of building wheels on TravisCI and appveyor using cibuildwheel
  • orjson Fast Python JSON library
  • inline-python Inline Python code directly in your Rust code
  • Rogue-Gym Customizable rogue-like game for AI experiments
    • Contains an example of building wheels on Azure Pipelines
  • fastuuid Python bindings to Rust's UUID library
  • wasmer-python Python library to run WebAssembly binaries
  • mocpy Astronomical Python library offering data structures for describing any arbitrary coverage regions on the unit sphere
  • tokenizers Python bindings to the Hugging Face tokenizers (NLP) written in Rust
  • pyre Fast Python HTTP server written in Rust
  • jsonschema-rs Fast JSON Schema validation library
  • css-inline CSS inlining for Python implemented in Rust
  • cryptography Python cryptography library with some functionality in Rust
  • polaroid Hyper Fast and safe image manipulation library for Python written in Rust
  • ormsgpack Fast Python msgpack library

Contributing

Everyone is welcome to contribute to PyO3! There are many ways to support the project, such as:

  • help PyO3 users with issues on Github and Gitter
  • improve documentation
  • write features and bugfixes
  • publish blogs and examples of how to use PyO3

Our contributing notes and architecture guide have more resources if you wish to volunteer time for PyO3 and are searching for a place to start.

If you don't have time to contribute yourself but still wish to support the project's future success, some of our maintainers have Github sponsorship pages.

License

PyO3 is licensed under the Apache-2.0 license. Python is licensed under the Python License.

Python Modules

You can create a module using #[pymodule]:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyfunction]
fn double(x: usize) -> usize {
    x * 2
}

/// This module is implemented in Rust.
#[pymodule]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
}

The #[pymodule] procedural macro takes care of exporting the initialization function of your module to Python.

The module's name defaults to the name of the Rust function. You can override the module name by using #[pyo3(name = "custom_name")]:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyfunction]
fn double(x: usize) -> usize {
    x * 2
}

#[pymodule]
#[pyo3(name = "custom_name")]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
}

The name of the module must match the name of the .so or .pyd file. Otherwise, you will get an import error in Python with the following message: ImportError: dynamic module does not define module export function (PyInit_name_of_your_module)

To import the module, either copy the shared library as described in the Manual builds section of the Building and Distribution chapter, or use a tool, e.g. maturin develop with maturin or python setup.py develop with setuptools-rust.

Documentation

The Rust doc comments of the module initialization function will be applied automatically as the Python docstring of your module.

For example, building off of the above code, this will print This module is implemented in Rust.:

import my_extension

print(my_extension.__doc__)

Organizing your module registration code

For most projects, it's adequate to centralize all your FFI code into a single Rust module.

However, for larger projects, it can be helpful to split your Rust code into several Rust modules to keep your code readable. Unfortunately, though, some of the macros like wrap_pyfunction! do not yet work when used on code defined in other modules (#1709). One way to work around this is to pass references to the PyModule so that each module registers its own FFI code. For example:


#![allow(unused)]
fn main() {
// src/lib.rs
use pyo3::prelude::*;

#[pymodule]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {
    dirutil::register(py, m)?;
    osutil::register(py, m)?;
    Ok(())
}

// src/dirutil.rs
mod dirutil {
use pyo3::prelude::*;

pub(crate) fn register(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_class::<SomeClass>()?;
    Ok(())
}

#[pyclass]
struct SomeClass {
    x: usize,
}
}

// src/osutil.rs
mod osutil {
use pyo3::prelude::*;

pub(crate) fn register(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(determine_current_os, m)?)?;
    Ok(())
}

#[pyfunction]
fn determine_current_os() -> String {
    "linux".to_owned()
}
}
}

Another workaround for splitting FFI code across multiple modules (#1709) is to add use module::*, like this:


#![allow(unused)]
fn main() {
// src/lib.rs
use pyo3::prelude::*;
use osutil::*;

#[pymodule]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(determine_current_os, m)?)?;
    Ok(())
}

// src/osutil.rs
mod osutil {
use pyo3::prelude::*;

#[pyfunction]
pub(crate) fn determine_current_os() -> String {
    "linux".to_owned()
}
}
}

Python submodules

You can create a module hierarchy within a single extension module by using PyModule.add_submodule(). For example, you could define the modules parent_module and parent_module.child_module.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pymodule]
fn parent_module(py: Python, m: &PyModule) -> PyResult<()> {
    register_child_module(py, m)?;
    Ok(())
}

fn register_child_module(py: Python, parent_module: &PyModule) -> PyResult<()> {
    let child_module = PyModule::new(py, "child_module")?;
    child_module.add_function(wrap_pyfunction!(func, child_module)?)?;
    parent_module.add_submodule(child_module)?;
    Ok(())
}

#[pyfunction]
fn func() -> String {
    "func".to_string()
}

Python::with_gil(|py| {
   use pyo3::wrap_pymodule;
   use pyo3::types::IntoPyDict;
   let parent_module = wrap_pymodule!(parent_module)(py);
   let ctx = [("parent_module", parent_module)].into_py_dict(py);

   py.run("assert parent_module.child_module.func() == 'func'", None, Some(&ctx)).unwrap();
})
}

Note that this does not define a package, so this won’t allow Python code to directly import submodules by using from parent_module import child_module. For more information, see #759 and #1517.

It is not necessary to add #[pymodule] on nested modules, which is only required on the top-level module.

Python Functions

PyO3 supports two ways to define a free function in Python. Both require registering the function to a module.

One way is annotating a function with #[pyfunction] and then adding it to the module using the wrap_pyfunction! macro.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyfunction]
fn double(x: usize) -> usize {
    x * 2
}

#[pymodule]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
}

Alternatively, there is a shorthand: the function can be placed inside the module definition and annotated with #[pyfn], as below:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pymodule]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {

    #[pyfn(m)]
    fn double(x: usize) -> usize {
        x * 2
    }

    Ok(())
}
}

#[pyfn(m)] is just syntactic sugar for #[pyfunction], and takes all the same options documented in the rest of this chapter. The code above is expanded to the following:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pymodule]
fn my_extension(py: Python, m: &PyModule) -> PyResult<()> {

    #[pyfunction]
    fn double(x: usize) -> usize {
        x * 2
    }

    m.add_function(wrap_pyfunction!(double, m)?)?;
    Ok(())
}
}

Function options

The #[pyo3] attribute can be used to modify properties of the generated Python function. It can take any combination of the following options:

  • #[pyo3(name = "...")]

    Overrides the name generated in Python:

    
    #![allow(unused)]
    fn main() {
    use pyo3::prelude::*;
    
    #[pyfunction]
    #[pyo3(name = "no_args")]
    fn no_args_py() -> usize { 42 }
    
    #[pymodule]
    fn module_with_functions(py: Python, m: &PyModule) -> PyResult<()> {
        m.add_function(wrap_pyfunction!(no_args_py, m)?)?;
        Ok(())
    }
    
    Python::with_gil(|py| {
        let m = pyo3::wrap_pymodule!(module_with_functions)(py);
        assert!(m.getattr(py, "no_args").is_ok());
        assert!(m.getattr(py, "no_args_py").is_err());
    });
    }
    

Argument parsing

The #[pyfunction] attribute supports specifying details of argument parsing. The details are given in the section "Method arguments" of the Classes chapter. Here is an example for a function that accepts arbitrary keyword arguments (**kwargs in Python syntax) and returns the number that was passed:

use pyo3::prelude::*;
use pyo3::types::PyDict;

#[pyfunction(kwds="**")]
fn num_kwds(kwds: Option<&PyDict>) -> usize {
    kwds.map_or(0, |dict| dict.len())
}

#[pymodule]
fn module_with_functions(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(num_kwds, m)?).unwrap();
    Ok(())
}

fn main() {}
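
For comparison, here is a minimal sketch of a function accepting arbitrary positional arguments (*args in Python syntax), assuming the same #[args]-style options shown above can be given directly inside #[pyfunction]; the function and module names are illustrative:

use pyo3::prelude::*;
use pyo3::types::PyTuple;

// "args" is marked as the *args parameter; its type must be &PyTuple.
#[pyfunction(args = "*")]
fn num_args(args: &PyTuple) -> usize {
    args.len()
}

#[pymodule]
fn module_with_varargs(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(num_args, m)?)?;
    Ok(())
}

fn main() {}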

Making the function signature available to Python

In order to make the function signature available to Python to be retrieved via inspect.signature, use the #[pyo3(text_signature)] annotation as in the example below. The / signifies the end of positional-only arguments. (This is not a feature of this library in particular, but the general format used by CPython for annotating signatures of built-in functions.)


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

/// This function adds two unsigned 64-bit integers.
#[pyfunction]
#[pyo3(text_signature = "(a, b, /)")]
fn add(a: u64, b: u64) -> u64 {
    a + b
}
}

This also works for classes and methods:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyType;

// it works even if the item is not documented:

#[pyclass]
#[pyo3(text_signature = "(c, d, /)")]
struct MyClass {}

#[pymethods]
impl MyClass {
    // the signature for the constructor is attached
    // to the struct definition instead.
    #[new]
    fn new(c: i32, d: &str) -> Self {
        Self {}
    }
    // the self argument should be written $self
    #[pyo3(text_signature = "($self, e, f)")]
    fn my_method(&self, e: i32, f: i32) -> i32 {
        e + f
    }
    #[classmethod]
    #[pyo3(text_signature = "(cls, e, f)")]
    fn my_class_method(cls: &PyType, e: i32, f: i32) -> i32 {
        e + f
    }
    #[staticmethod]
    #[pyo3(text_signature = "(e, f)")]
    fn my_static_method(e: i32, f: i32) -> i32 {
        e + f
    }
}
}

Note that text_signature on classes is not compatible with compilation in abi3 mode until Python 3.10 or greater.

Making the function signature available to Python (old method)

Alternatively, simply make sure the first line of your docstring is formatted like in the following example. Please note that the newline after the -- is mandatory. The / signifies the end of positional-only arguments.

#[pyo3(text_signature)] should be preferred, since it will override automatically generated signatures when those are added in a future version of PyO3.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

/// add(a, b, /)
/// --
///
/// This function adds two unsigned 64-bit integers.
#[pyfunction]
fn add(a: u64, b: u64) -> u64 {
    a + b
}

// a function with a signature but without docs. Both blank lines after the `--` are mandatory.

/// sub(a, b, /)
/// --
///
///
#[pyfunction]
fn sub(a: u64, b: u64) -> u64 {
    a - b
}
}

When annotated like this, signatures are also correctly displayed in IPython.

>>> pyo3_test.add?
Signature: pyo3_test.add(a, b, /)
Docstring: This function adds two unsigned 64-bit integers.
Type:      builtin_function_or_method

Closures

Currently, there are no conversions between Fns in Rust and callables in Python. This would definitely be possible and very useful, so contributions are welcome. In the meantime, you can do the following:

Calling Python functions in Rust

You can pass Python def'd functions and built-in functions to Rust functions. PyFunction corresponds to regular Python functions, while PyCFunction describes built-ins such as repr().

You can also use PyAny::is_callable to check if you have a callable object. is_callable will return true for functions (including lambdas), methods and objects with a __call__ method. You can call the object with PyAny::call with the args as first parameter and the kwargs (or None) as second parameter. There are also PyAny::call0 with no args and PyAny::call1 with only positional args.
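
As a brief illustration, here is a minimal sketch (assuming an embedded interpreter, as in the auto-initialize example earlier) that builds a Python lambda and calls it from Rust:

use pyo3::prelude::*;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        // Create a Python callable (a lambda) and check that it is callable.
        let add_one = py.eval("lambda x: x + 1", None, None)?;
        assert!(add_one.is_callable());

        // `call1` passes the positional arguments as a Rust tuple.
        let result: i64 = add_one.call1((41,))?.extract()?;
        assert_eq!(result, 42);
        Ok(())
    })
}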

Calling Rust functions in Python

If you have a static function, you can expose it with #[pyfunction] and use wrap_pyfunction! to get the corresponding PyCFunction. For dynamic functions, e.g. lambdas and functions that were passed as arguments, you must put them in some kind of owned container, e.g. a Box. (A long-term solution will be a special container similar to wasm-bindgen's Closure). You can then use a #[pyclass] struct with that container as a field as a way to pass the function over the FFI barrier. You can even make that class callable with __call__ so it looks like a function in Python code.
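
For example, here is a minimal sketch of that pattern: a hypothetical #[pyclass] wrapping a boxed Rust closure, made callable from Python with the #[call] attribute (covered in the Classes chapter). The type and field names are illustrative:

use pyo3::prelude::*;

#[pyclass]
struct RustClosure {
    // The boxed closure must be Send because the object is shared with Python.
    closure: Box<dyn Fn(i64) -> i64 + Send>,
}

#[pymethods]
impl RustClosure {
    #[call]
    fn __call__(&self, x: i64) -> i64 {
        (self.closure)(x)
    }
}

fn main() {
    Python::with_gil(|py| {
        let doubler = RustClosure { closure: Box::new(|x| x * 2) };
        let obj = PyCell::new(py, doubler).unwrap();
        pyo3::py_run!(py, obj, "assert obj(21) == 42");
    });
}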

Accessing the module of a function

It is possible to access the module of a #[pyfunction] in the function body by using the #[pyo3(pass_module)] option:

use pyo3::prelude::*;

#[pyfunction]
#[pyo3(pass_module)]
fn pyfunction_with_module(module: &PyModule) -> PyResult<&str> {
    module.name()
}

#[pymodule]
fn module_with_fn(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(pyfunction_with_module, m)?)
}

fn main() {}

If pass_module is set, the first argument must be the &PyModule. It is then possible to use the module in the function body.

Accessing the FFI functions

In order to make Rust functions callable from Python, PyO3 generates an extern "C" function whose exact signature depends on the Rust signature. (PyO3 chooses the optimal Python argument passing convention.) It then embeds the call to the Rust function inside this FFI-wrapper function. This wrapper handles extraction of the regular arguments and the keyword arguments from the input PyObjects.

The wrap_pyfunction macro can be used to directly get a PyCFunction given a #[pyfunction] and a PyModule: wrap_pyfunction!(rust_fun, module).
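
As a small sketch of this (the function and module names are illustrative), the returned PyCFunction can be called directly from Rust like any other Python object:

use pyo3::prelude::*;
use pyo3::types::PyCFunction;

#[pyfunction]
fn square(x: i64) -> i64 {
    x * x
}

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        let module = PyModule::new(py, "example")?;
        // `wrap_pyfunction!` yields the FFI wrapper as a `&PyCFunction`.
        let func: &PyCFunction = wrap_pyfunction!(square, module)?;
        let result: i64 = func.call1((7,))?.extract()?;
        assert_eq!(result, 49);
        Ok(())
    })
}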

Python Classes

PyO3 exposes a group of attributes powered by Rust's proc macro system for defining Python classes as Rust structs.

The main attribute is #[pyclass], which is placed upon a Rust struct to generate a Python type for it. A struct will usually also have one #[pymethods]-annotated impl block for the struct, which is used to define Python methods and constants for the generated Python type. (If the multiple-pymethods feature is enabled each #[pyclass] is allowed to have multiple #[pymethods] blocks.) Finally, there may be multiple #[pyproto] trait implementations for the struct, which are used to define certain Python magic methods such as __str__.

This chapter will discuss the functionality and configuration these attributes offer. Each is covered in its own section below.

Defining a new class

To define a custom Python class, a Rust struct needs to be annotated with the #[pyclass] attribute.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
    debug: bool,
}
}

Because Python objects are freely shared between threads by the Python interpreter, all structs annotated with #[pyclass] must implement Send (unless annotated with #[pyclass(unsendable)]).

The above example generates implementations for PyTypeInfo, PyTypeObject, and PyClass for MyClass. To see these generated implementations, refer to the implementation details at the end of this chapter.

Adding the class to a module

Custom Python classes can then be added to a module using add_class().


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
   num: i32,
   debug: bool,
}
#[pymodule]
fn mymodule(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_class::<MyClass>()?;
    Ok(())
}
}

PyCell and interior mutability

You sometimes need to convert your pyclass into a Python object and access it from Rust code (e.g., for testing it). PyCell is the primary interface for that.

PyCell<T: PyClass> is always allocated in the Python heap, so Rust doesn't have ownership of it. In other words, Rust code can only extract a &PyCell<T>, not a PyCell<T>.

Thus, to mutate data behind &PyCell safely, PyO3 employs the Interior Mutability Pattern like RefCell.

Users who are familiar with RefCell can use PyCell just like RefCell.

For users who are not very familiar with RefCell, here is a reminder of Rust's rules of borrowing:

  • At any given time, you can have either (but not both of) one mutable reference or any number of immutable references.
  • References must always be valid.

PyCell, like RefCell, ensures these borrowing rules by tracking references at runtime.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyDict;
#[pyclass]
struct MyClass {
    #[pyo3(get)]
    num: i32,
    debug: bool,
}
Python::with_gil(|py| {
    let obj = PyCell::new(py, MyClass { num: 3, debug: true }).unwrap();
    {
        let obj_ref = obj.borrow(); // Get PyRef
        assert_eq!(obj_ref.num, 3);
        // You cannot get PyRefMut unless all PyRefs are dropped
        assert!(obj.try_borrow_mut().is_err());
    }
    {
        let mut obj_mut = obj.borrow_mut(); // Get PyRefMut
        obj_mut.num = 5;
        // You cannot get any other refs until the PyRefMut is dropped
        assert!(obj.try_borrow().is_err());
        assert!(obj.try_borrow_mut().is_err());
    }

    // You can convert `&PyCell` to a Python object
    pyo3::py_run!(py, obj, "assert obj.num == 5");
});
}

&PyCell<T> is bounded by the same lifetime as a GILGuard. To make the object longer lived (for example, to store it in a struct on the Rust side), you can use Py<T>, which stores an object longer than the GIL lifetime, and therefore needs a Python<'_> token to access.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
}
fn return_myclass() -> Py<MyClass> {
    Python::with_gil(|py| Py::new(py, MyClass { num: 1 }).unwrap())
}
let obj = return_myclass();
Python::with_gil(|py|{
    let cell = obj.as_ref(py); // Py<MyClass>::as_ref returns &PyCell<MyClass>
    let obj_ref = cell.borrow(); // Get PyRef<T>
    assert_eq!(obj_ref.num, 1);
});
}

Customizing the class

The #[pyclass] macro accepts the following parameters:

  • name="XXX" - Set the class name shown in Python code. By default, the struct name is used as the class name.
  • freelist=XXX - The freelist parameter adds support for a free allocation list to the custom class. The performance improvement applies to types that are often created and deleted in a row, so that they can benefit from a freelist. XXX is the number of items for the free list.
  • gc - Classes with the gc parameter participate in Python garbage collection. If a custom class contains references to other Python objects that can be collected, the PyGCProtocol trait has to be implemented.
  • weakref - Adds support for Python weak references.
  • extends=BaseType - Use a custom base class. The base BaseType must implement PyTypeInfo.
  • subclass - Allows Python classes to inherit from this class.
  • dict - Adds __dict__ support, so that the instances of this type have a dictionary containing arbitrary instance variables.
  • unsendable - Makes it safe to expose !Send structs to Python, where all objects can be accessed by multiple threads. A class marked with unsendable panics when accessed by another thread.
  • module="XXX" - Set the name of the module the class will be shown as defined in. If not given, the class will be a virtual member of the builtins module.
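
For example, several of these options can be combined on one struct; the following is a sketch with illustrative names:

use pyo3::prelude::*;

// Shown in Python as `my_extension.Point`, can be subclassed, supports weak
// references and a `__dict__` for arbitrary instance attributes.
#[pyclass(name = "Point", subclass, weakref, dict, module = "my_extension")]
struct PointData {
    x: f64,
    y: f64,
}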

Constructor

By default it is not possible to create an instance of a custom class from Python code. To declare a constructor, you need to define a method and annotate it with the #[new] attribute. Only Python's __new__ method can be specified; __init__ is not available.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
}

#[pymethods]
impl MyClass {
    #[new]
    fn new(num: i32) -> Self {
        MyClass { num }
    }
}
}

Alternatively, if your new method may fail, you can return PyResult<Self>.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
}

#[pymethods]
impl MyClass {
    #[new]
    fn new(num: i32) -> PyResult<Self> {
        Ok(MyClass { num })
    }
}
}

If no method marked with #[new] is declared, object instances can only be created from Rust, but not from Python.

For arguments, see the Method arguments section below.

Return type

Generally, #[new] methods have to return T: Into<PyClassInitializer<Self>> or PyResult<T> where T: Into<PyClassInitializer<Self>>.

For constructors that may fail, you should wrap the return type in a PyResult as well. Consult the table below to determine which type your constructor should return:

                              Cannot fail              May fail
  No inheritance              T                        PyResult<T>
  Inheritance (T inherits U)  (T, U)                   PyResult<(T, U)>
  Inheritance (general case)  PyClassInitializer<T>    PyResult<PyClassInitializer<T>>

Inheritance

By default, PyAny is used as the base class. To override this default, use the extends parameter for pyclass with the full path to the base class.

For convenience, (T, U) implements Into<PyClassInitializer<T>> where U is the baseclass of T. But for more deeply nested inheritance, you have to return PyClassInitializer<T> explicitly.

To get a parent class from a child, use PyRef instead of &self for methods, or PyRefMut instead of &mut self. Then you can access a parent class by self_.as_ref() as &Self::BaseClass, or by self_.into_super() as PyRef<Self::BaseClass>.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyclass(subclass)]
struct BaseClass {
    val1: usize,
}

#[pymethods]
impl BaseClass {
    #[new]
    fn new() -> Self {
        BaseClass { val1: 10 }
    }

    pub fn method(&self) -> PyResult<usize> {
        Ok(self.val1)
    }
}

#[pyclass(extends=BaseClass, subclass)]
struct SubClass {
    val2: usize,
}

#[pymethods]
impl SubClass {
    #[new]
    fn new() -> (Self, BaseClass) {
        (SubClass { val2: 15 }, BaseClass::new())
    }

    fn method2(self_: PyRef<Self>) -> PyResult<usize> {
        let super_ = self_.as_ref();  // Get &BaseClass
        super_.method().map(|x| x * self_.val2)
    }
}

#[pyclass(extends=SubClass)]
struct SubSubClass {
    val3: usize,
}

#[pymethods]
impl SubSubClass {
    #[new]
    fn new() -> PyClassInitializer<Self> {
        PyClassInitializer::from(SubClass::new())
            .add_subclass(SubSubClass{val3: 20})
    }

    fn method3(self_: PyRef<Self>) -> PyResult<usize> {
        let v = self_.val3;
        let super_ = self_.into_super();  // Get PyRef<SubClass>
        SubClass::method2(super_).map(|x| x * v)
    }
}
Python::with_gil(|py| {
    let subsub = pyo3::PyCell::new(py, SubSubClass::new()).unwrap();
    pyo3::py_run!(py, subsub, "assert subsub.method3() == 3000")
});
}

You can also inherit native types such as PyDict, if they implement PySizedLayout. However, this is not supported when building for the Python limited API (aka the abi3 feature of PyO3).

However, because of some technical problems, we don't currently provide safe upcasting methods for types that inherit native types. Even in such cases, you can unsafely get a base class by raw pointer conversion.


#![allow(unused)]
fn main() {
#[cfg(not(Py_LIMITED_API))] {
use pyo3::prelude::*;
use pyo3::types::PyDict;
use pyo3::{AsPyPointer, PyNativeType};
use std::collections::HashMap;

#[pyclass(extends=PyDict)]
#[derive(Default)]
struct DictWithCounter {
    counter: HashMap<String, usize>,
}

#[pymethods]
impl DictWithCounter {
    #[new]
    fn new() -> Self {
        Self::default()
    }
    fn set(mut self_: PyRefMut<Self>, key: String, value: &PyAny) -> PyResult<()> {
        self_.counter.entry(key.clone()).or_insert(0);
        let py = self_.py();
        let dict: &PyDict = unsafe { py.from_borrowed_ptr_or_err(self_.as_ptr())? };
        dict.set_item(key, value)
    }
}
Python::with_gil(|py| {
    let cnt = pyo3::PyCell::new(py, DictWithCounter::new()).unwrap();
    pyo3::py_run!(py, cnt, "cnt.set('abc', 10); assert cnt['abc'] == 10")
});
}
}

If SubClass does not provide a baseclass initialization, the compilation fails.

use pyo3::prelude::*;

#[pyclass]
struct BaseClass {
    val1: usize,
}

#[pyclass(extends=BaseClass)]
struct SubClass {
    val2: usize,
}

#[pymethods]
impl SubClass {
    #[new]
    fn new() -> Self {
        SubClass { val2: 15 }
    }
}

Object properties

PyO3 supports two ways to add properties to your #[pyclass]:

  • For simple fields with no side effects, a #[pyo3(get, set)] attribute can be added directly to the field definition in the #[pyclass].
  • For properties which require computation you can define #[getter] and #[setter] functions in the #[pymethods] block.

We'll cover each of these in the following sections.

Object properties using #[pyo3(get, set)]

For simple cases where a member variable is just read and written with no side effects, you can declare getters and setters in your #[pyclass] field definition using the pyo3 attribute, like in the example below:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    #[pyo3(get, set)]
    num: i32
}
}

The above would make the num field available for reading and writing as a self.num Python property. To expose the property with a different name to the field, specify this alongside the rest of the options, e.g. #[pyo3(get, set, name = "custom_name")].

Properties can be made read-only or write-only by using just #[pyo3(get)] or #[pyo3(set)], respectively.

To use these annotations, your field type must implement some conversion traits:

  • For get the field type must implement both IntoPy<PyObject> and Clone.
  • For set the field type must implement FromPyObject.
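
For example, here is a short sketch (struct and field names are illustrative) exposing one field for reading and writing, and another read-only field under a different Python name:

use pyo3::prelude::*;

#[pyclass]
struct Rectangle {
    // Available from Python as `self.width`, both readable and writable.
    #[pyo3(get, set)]
    width: u32,
    // Read-only, exposed to Python as `self.h`.
    #[pyo3(get, name = "h")]
    height: u32,
}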

Object properties using #[getter] and #[setter]

For cases which don't satisfy the #[pyo3(get, set)] trait requirements, or need side effects, descriptor methods can be defined in a #[pymethods] impl block.

This is done using the #[getter] and #[setter] attributes, like in the example below:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
}

#[pymethods]
impl MyClass {
    #[getter]
    fn num(&self) -> PyResult<i32> {
        Ok(self.num)
    }
}
}

A getter or setter's function name is used as the property name by default. There are several ways to override the name.

If a function name starts with get_ or set_ for getter or setter respectively, the descriptor name becomes the function name with this prefix removed. This is also useful in case of Rust keywords like type (raw identifiers can be used since Rust 2018).


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
}
#[pymethods]
impl MyClass {
    #[getter]
    fn get_num(&self) -> PyResult<i32> {
        Ok(self.num)
    }

    #[setter]
    fn set_num(&mut self, value: i32) -> PyResult<()> {
        self.num = value;
        Ok(())
    }
}
}

In this case, a property num is defined and available from Python code as self.num.

Both the #[getter] and #[setter] attributes accept one parameter. If this parameter is specified, it is used as the property name, i.e.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
   num: i32,
}
#[pymethods]
impl MyClass {
    #[getter(number)]
    fn num(&self) -> PyResult<i32> {
        Ok(self.num)
    }

    #[setter(number)]
    fn set_num(&mut self, value: i32) -> PyResult<()> {
        self.num = value;
        Ok(())
    }
}
}

In this case, the property number is defined and available from Python code as self.number.

Attributes defined by #[setter] or #[pyo3(set)] will always raise AttributeError on del operations. Support for defining custom del behavior is tracked in #1778.

Instance methods

To define a Python compatible method, an impl block for your struct has to be annotated with the #[pymethods] attribute. PyO3 generates Python compatible wrappers for all functions in this block with some variations, like descriptors, class methods, static methods, etc.

Since Rust allows any number of impl blocks, you can easily split methods between those accessible to Python (and Rust) and those accessible only to Rust. However, to have multiple #[pymethods]-annotated impl blocks for the same struct you must enable the multiple-pymethods feature of PyO3.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
}
#[pymethods]
impl MyClass {
    fn method1(&self) -> PyResult<i32> {
        Ok(10)
    }

    fn set_method(&mut self, value: i32) -> PyResult<()> {
        self.num = value;
        Ok(())
    }
}
}

Calls to these methods are protected by the GIL, so both &self and &mut self can be used. The return type must be PyResult<T> or T for some T that implements IntoPy<PyObject>; the latter is allowed if the method cannot raise Python exceptions.

A Python parameter can be specified as part of the method signature; in this case, the py argument gets injected by the method wrapper, e.g.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
    debug: bool,
}
#[pymethods]
impl MyClass {
    fn method2(&self, py: Python) -> PyResult<i32> {
        Ok(10)
    }
}
}

From the Python perspective, the method2 in this example does not accept any arguments.

Class methods

To create a class method for a custom class, the method needs to be annotated with the #[classmethod] attribute. This is the equivalent of the Python decorator @classmethod.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyType;
#[pyclass]
struct MyClass {
    num: i32,
    debug: bool,
}
#[pymethods]
impl MyClass {
    #[classmethod]
    fn cls_method(cls: &PyType) -> PyResult<i32> {
        Ok(10)
    }
}
}

Declares a class method callable from Python.

  • The first parameter is the type object of the class on which the method is called. This may be the type object of a derived class.
  • The first parameter implicitly has type &PyType.
  • For details on parameter-list, see the documentation of Method arguments section.
  • The return type must be PyResult<T> or T for some T that implements IntoPy<PyObject>.

Static methods

To create a static method for a custom class, the method needs to be annotated with the #[staticmethod] attribute. The return type must be T or PyResult<T> for some T that implements IntoPy<PyObject>.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {
    num: i32,
    debug: bool,
}
#[pymethods]
impl MyClass {
    #[staticmethod]
    fn static_method(param1: i32, param2: &str) -> PyResult<i32> {
        Ok(10)
    }
}
}

Class attributes

To create a class attribute (also called class variable), a method without any arguments can be annotated with the #[classattr] attribute. The return type must be T for some T that implements IntoPy<PyObject>.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {}
#[pymethods]
impl MyClass {
    #[classattr]
    fn my_attribute() -> String {
        "hello".to_string()
    }
}

Python::with_gil(|py| {
    let my_class = py.get_type::<MyClass>();
    pyo3::py_run!(py, my_class, "assert my_class.my_attribute == 'hello'")
});
}

Note that unlike class variables defined in Python code, class attributes defined in Rust cannot be mutated at all:

// Would raise a `TypeError: can't set attributes of built-in/extension type 'MyClass'`
pyo3::py_run!(py, my_class, "my_class.my_attribute = 'foo'")

If the class attribute is defined with const code only, one can also annotate associated constants:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {}
#[pymethods]
impl MyClass {
    #[classattr]
    const MY_CONST_ATTRIBUTE: &'static str = "foobar";
}
}

Callable objects

To specify a custom __call__ method for a custom class, the method needs to be annotated with the #[call] attribute. Arguments of the method are specified as for instance methods.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyTuple;
#[pyclass]
struct MyClass {
    num: i32,
    debug: bool,
}
#[pymethods]
impl MyClass {
    #[call]
    #[args(args="*")]
    fn __call__(&self, args: &PyTuple) -> PyResult<i32> {
        println!("MyClass has been called");
        Ok(self.num)
    }
}
}

Method arguments

By default, PyO3 uses function signatures to determine which arguments are required. Then it scans the incoming args and kwargs parameters. If it cannot find all required parameters, it raises a TypeError exception. It is possible to override the default behavior with the #[args(...)] attribute. This attribute accepts a comma separated list of parameters in the form of attr_name="default value". Each parameter has to match the method parameter by name.

Each parameter can be one of the following types:

  • "*": var arguments separator, each parameter defined after "*" is a keyword-only parameter. Corresponds to python's def meth(*, arg1.., arg2=..).
  • args="*": "args" is var args, corresponds to Python's def meth(*args). Type of the args parameter has to be &PyTuple.
  • kwargs="**": "kwargs" receives keyword arguments, corresponds to Python's def meth(**kwargs). The type of the kwargs parameter has to be Option<&PyDict>.
  • arg="Value": arguments with default value. Corresponds to Python's def meth(arg=Value). If the arg argument is defined after var arguments, it is treated as a keyword-only argument. Note that Value has to be valid Rust code; PyO3 just inserts it into the generated code unmodified.

Example:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::{PyDict, PyTuple};

#[pyclass]
struct MyClass {
    num: i32,
    debug: bool,
}
#[pymethods]
impl MyClass {
    #[new]
    #[args(num = "-1", debug = "true")]
    fn new(num: i32, debug: bool) -> Self {
        MyClass { num, debug }
    }

    #[args(
        num = "10",
        debug = "true",
        py_args = "*",
        name = "\"Hello\"",
        py_kwargs = "**"
    )]
    fn method(
        &mut self,
        num: i32,
        debug: bool,
        name: &str,
        py_args: &PyTuple,
        py_kwargs: Option<&PyDict>,
    ) -> PyResult<String> {
        self.debug = debug;
        self.num = num;
        Ok(format!(
            "py_args={:?}, py_kwargs={:?}, name={}, num={}, debug={}",
            py_args, py_kwargs, name, self.num, self.debug
        ))
    }

    fn make_change(&mut self, num: i32, debug: bool) -> PyResult<String> {
        self.num = num;
        self.debug = debug;
        Ok(format!("num={}, debug={}", self.num, self.debug))
    }
}
}

N.B. the position of the "*" argument (if included) controls how positional and keyword arguments are handled. In Python:

import mymodule

mc = mymodule.MyClass()
print(mc.method(44, False, "World", 666, x=44, y=55))
print(mc.method(num=-1, name="World"))
print(mc.make_change(44, False))
print(mc.make_change(debug=False, num=-1))

Produces output:

py_args=('World', 666), py_kwargs=Some({'x': 44, 'y': 55}), name=Hello, num=44, debug=false
py_args=(), py_kwargs=None, name=World, num=-1, debug=true
num=44, debug=false
num=-1, debug=false

Implementation details

The #[pyclass] macros rely on a lot of conditional code generation: each #[pyclass] can optionally have a #[pymethods] block as well as several different possible #[pyproto] trait implementations.

To support this flexibility the #[pyclass] macro expands to a blob of boilerplate code which sets up the structure for "dtolnay specialization". This implementation pattern enables the Rust compiler to use #[pymethods] and #[pyproto] implementations when they are present, and fall back to default (empty) definitions when they are not.

This simple technique works for the case when there is zero or one implementations. To support multiple #[pymethods] for a #[pyclass] (in the multiple-pymethods feature), a registry mechanism provided by the inventory crate is used instead. This collects impls at library load time, but isn't supported on all platforms. See inventory: how it works for more details.

The #[pyclass] macro expands to roughly the code seen below. The PyClassImplCollector is the type used internally by PyO3 for dtolnay specialization:


#![allow(unused)]
fn main() {
#[cfg(not(feature = "multiple-pymethods"))] {
use pyo3::prelude::*;
// Note: the implementation differs slightly with the `multiple-pymethods` feature enabled.

/// Class for demonstration
struct MyClass {
    num: i32,
    debug: bool,
}

unsafe impl pyo3::PyTypeInfo for MyClass {
    type AsRefTarget = PyCell<Self>;

    const NAME: &'static str = "MyClass";
    const MODULE: Option<&'static str> = None;

    #[inline]
    fn type_object_raw(py: pyo3::Python) -> *mut pyo3::ffi::PyTypeObject {
        use pyo3::type_object::LazyStaticType;
        static TYPE_OBJECT: LazyStaticType = LazyStaticType::new();
        TYPE_OBJECT.get_or_init::<Self>(py)
    }
}

impl pyo3::pyclass::PyClass for MyClass {
    type Dict = pyo3::pyclass_slots::PyClassDummySlot;
    type WeakRef = pyo3::pyclass_slots::PyClassDummySlot;
    type BaseNativeType = PyAny;
}

impl pyo3::IntoPy<PyObject> for MyClass {
    fn into_py(self, py: pyo3::Python) -> pyo3::PyObject {
        pyo3::IntoPy::into_py(pyo3::Py::new(py, self).unwrap(), py)
    }
}

impl pyo3::class::impl_::PyClassImpl for MyClass {
    const DOC: &'static str = "Class for demonstration";
    const IS_GC: bool = false;
    const IS_BASETYPE: bool = false;
    const IS_SUBCLASS: bool = false;
    type Layout = PyCell<MyClass>;
    type BaseType = PyAny;
    type ThreadChecker = pyo3::class::impl_::ThreadCheckerStub<MyClass>;

    fn for_each_method_def(visitor: &mut dyn FnMut(&[pyo3::class::PyMethodDefType])) {
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<MyClass>::new();
        visitor(collector.py_methods());
        visitor(collector.py_class_descriptors());
        visitor(collector.object_protocol_methods());
        visitor(collector.async_protocol_methods());
        visitor(collector.context_protocol_methods());
        visitor(collector.descr_protocol_methods());
        visitor(collector.mapping_protocol_methods());
        visitor(collector.number_protocol_methods());
    }
    fn get_new() -> Option<pyo3::ffi::newfunc> {
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<Self>::new();
        collector.new_impl()
    }
    fn get_alloc() -> Option<pyo3::ffi::allocfunc> {
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<Self>::new();
        collector.alloc_impl()
    }
    fn get_free() -> Option<pyo3::ffi::freefunc> {
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<Self>::new();
        collector.free_impl()
    }
    fn get_call() -> Option<pyo3::ffi::PyCFunctionWithKeywords> {
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<Self>::new();
        collector.call_impl()
    }
    fn for_each_proto_slot(visitor: &mut dyn FnMut(&[pyo3::ffi::PyType_Slot])) {
        // Implementation which uses dtolnay specialization to load all slots.
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<Self>::new();
        visitor(collector.object_protocol_slots());
        visitor(collector.number_protocol_slots());
        visitor(collector.iter_protocol_slots());
        visitor(collector.gc_protocol_slots());
        visitor(collector.descr_protocol_slots());
        visitor(collector.mapping_protocol_slots());
        visitor(collector.sequence_protocol_slots());
        visitor(collector.async_protocol_slots());
        visitor(collector.buffer_protocol_slots());
    }

    fn get_buffer() -> Option<&'static pyo3::class::impl_::PyBufferProcs> {
        use pyo3::class::impl_::*;
        let collector = PyClassImplCollector::<Self>::new();
        collector.buffer_procs()
    }
}
Python::with_gil(|py| {
    let cls = py.get_type::<MyClass>();
    pyo3::py_run!(py, cls, "assert cls.__name__ == 'MyClass'")
});
}
}

Class customizations

PyO3 uses the #[pyproto] attribute in combination with special traits to implement certain protocol (aka __dunder__) methods of Python classes. The special traits are listed in this chapter of the guide. See also the documentation for the pyo3::class module.

Python's object model defines several protocols for different object behavior, such as the sequence, mapping, and number protocols. You may be familiar with implementing these protocols in Python classes by "dunder" methods, such as __str__ or __repr__.

In the Python C-API which PyO3 is dependent upon, many of these protocol methods have to be provided into special "slots" on the class type object. To fill these slots PyO3 uses the #[pyproto] attribute in combination with special traits.

All #[pyproto] methods can return T instead of PyResult<T> if the method implementation is infallible. In addition, if the return type is (), it can be omitted altogether.

There are many "dunder" methods which are not included in any of PyO3's protocol traits, such as __dir__. These methods can be implemented in #[pymethods] as already covered in the previous section.

Basic object customization

The PyObjectProtocol trait provides several basic customizations.

Attribute access

To customize object attribute access, define the following methods:

  • fn __getattr__(&self, name: impl FromPyObject) -> PyResult<impl IntoPy<PyObject>>
  • fn __setattr__(&mut self, name: impl FromPyObject, value: impl FromPyObject) -> PyResult<()>
  • fn __delattr__(&mut self, name: impl FromPyObject) -> PyResult<()>

Each method corresponds to Python's self.attr, self.attr = value and del self.attr code.

String Conversions

  • fn __repr__(&self) -> PyResult<impl ToPyObject<ObjectType=PyString>>

  • fn __str__(&self) -> PyResult<impl ToPyObject<ObjectType=PyString>>

    Possible return types for __str__ and __repr__ are PyResult<String> or PyResult<PyString>.

Comparison operators

  • fn __richcmp__(&self, other: impl FromPyObject, op: CompareOp) -> PyResult<impl ToPyObject>

    Overloads Python comparison operations (==, !=, <, <=, >, and >=). The op argument indicates the comparison operation being performed. The return type will normally be PyResult<bool>, but any Python object can be returned. If other is not of the type specified in the signature, the generated code will automatically return NotImplemented.

  • fn __hash__(&self) -> PyResult<impl PrimInt>

    Objects that compare equal must have the same hash value. The return type must be PyResult<T> where T is one of Rust's primitive integer types.

Other methods

  • fn __bool__(&self) -> PyResult<bool>

    Determines the "truthiness" of the object.
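
The sketch below pulls several of these together for a hypothetical Version class (names are illustrative); the return types follow the guidance above, with a primitive integer for __hash__:

use pyo3::prelude::*;
use pyo3::class::basic::CompareOp;
use pyo3::PyObjectProtocol;

#[pyclass]
#[derive(Clone)]
struct Version {
    major: u32,
    minor: u32,
}

#[pyproto]
impl PyObjectProtocol for Version {
    fn __repr__(&self) -> PyResult<String> {
        Ok(format!("Version({}, {})", self.major, self.minor))
    }

    // `other` is extracted as a Version; anything else returns NotImplemented.
    fn __richcmp__(&self, other: Version, op: CompareOp) -> PyResult<bool> {
        let lhs = (self.major, self.minor);
        let rhs = (other.major, other.minor);
        Ok(match op {
            CompareOp::Eq => lhs == rhs,
            CompareOp::Ne => lhs != rhs,
            CompareOp::Lt => lhs < rhs,
            CompareOp::Le => lhs <= rhs,
            CompareOp::Gt => lhs > rhs,
            CompareOp::Ge => lhs >= rhs,
        })
    }

    fn __hash__(&self) -> PyResult<u64> {
        Ok(((self.major as u64) << 32) | self.minor as u64)
    }

    fn __bool__(&self) -> PyResult<bool> {
        // Version 0.0 is treated as "falsy" in this sketch.
        Ok(self.major != 0 || self.minor != 0)
    }
}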

Emulating numeric types

The PyNumberProtocol trait can be implemented to emulate numeric types.

  • fn __add__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __sub__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __mul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __matmul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __truediv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __floordiv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __mod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __divmod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __pow__(lhs: impl FromPyObject, rhs: impl FromPyObject, modulo: Option<impl FromPyObject>) -> PyResult<impl ToPyObject>
  • fn __lshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __and__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __or__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __xor__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>

These methods are called to implement the binary arithmetic operations (+, -, *, @, /, //, %, divmod(), pow() and **, <<, >>, &, ^, and |).

If rhs is not of the type specified in the signature, the generated code will automatically return NotImplemented. This is not the case for lhs, which must match the signature or else raise a TypeError.

The reflected operations are also available:

  • fn __radd__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rsub__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rmul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rmatmul__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rtruediv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rfloordiv__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rmod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rdivmod__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rpow__(lhs: impl FromPyObject, rhs: impl FromPyObject, modulo: Option<impl FromPyObject>) -> PyResult<impl ToPyObject>
  • fn __rlshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rrshift__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rand__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __ror__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>
  • fn __rxor__(lhs: impl FromPyObject, rhs: impl FromPyObject) -> PyResult<impl ToPyObject>

The code generated for these methods expects all arguments to match the signature, or raises a TypeError.

This trait also supports the augmented arithmetic assignments (+=, -=, *=, @=, /=, //=, %=, **=, <<=, >>=, &=, ^=, |=):

  • fn __iadd__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __isub__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __imul__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __imatmul__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __itruediv__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __ifloordiv__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __imod__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __ipow__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __ilshift__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __irshift__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __iand__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __ior__(&'p mut self, other: impl FromPyObject) -> PyResult<()>
  • fn __ixor__(&'p mut self, other: impl FromPyObject) -> PyResult<()>

The following methods implement the unary arithmetic operations (-, +, abs() and ~):

  • fn __neg__(&'p self) -> PyResult<impl ToPyObject>
  • fn __pos__(&'p self) -> PyResult<impl ToPyObject>
  • fn __abs__(&'p self) -> PyResult<impl ToPyObject>
  • fn __invert__(&'p self) -> PyResult<impl ToPyObject>

Support for coercions:

  • fn __int__(&'p self) -> PyResult<impl ToPyObject>
  • fn __float__(&'p self) -> PyResult<impl ToPyObject>

Other:

  • fn __index__(&'p self) -> PyResult<impl ToPyObject>
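
As a short sketch (type and field names are illustrative), a hypothetical Money type could implement a few of these methods; note that, as described above, protocol methods may return plain values instead of PyResult when they are infallible:

use pyo3::prelude::*;
use pyo3::PyNumberProtocol;

#[pyclass]
#[derive(Clone)]
struct Money {
    cents: i64,
}

#[pyproto]
impl PyNumberProtocol for Money {
    // Implements `money + money`; binary methods take both operands as arguments.
    fn __add__(lhs: Money, rhs: Money) -> Money {
        Money { cents: lhs.cents + rhs.cents }
    }

    // Implements unary negation, `-money`.
    fn __neg__(&self) -> Money {
        Money { cents: -self.cents }
    }

    // Implements `int(money)`.
    fn __int__(&self) -> i64 {
        self.cents
    }
}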

Emulating sequential containers (such as lists or tuples)

The PySequenceProtocol trait can be implemented to emulate sequential container types.

For a sequence, the keys are the integers k for which 0 <= k < N, where N is the length of the sequence.

  • fn __len__(&self) -> PyResult<usize>

    Implements the built-in function len() for the sequence.

  • fn __getitem__(&self, idx: isize) -> PyResult<impl ToPyObject>

    Implements evaluation of the self[idx] element. If the idx value is outside the set of indexes for the sequence, IndexError should be raised.

    Note: Negative integer indexes are handled as follows: if __len__() is defined, it is called and the sequence length is used to compute a positive index, which is passed to __getitem__(). If __len__() is not defined, the index is passed as is to the function.

  • fn __setitem__(&mut self, idx: isize, value: impl FromPyObject) -> PyResult<()>

    Implements assignment to the self[idx] element. Same note as for __getitem__(). Should only be implemented if sequence elements can be replaced.

  • fn __delitem__(&mut self, idx: isize) -> PyResult<()>

    Implements deletion of the self[idx] element. Same note as for __getitem__(). Should only be implemented if sequence elements can be deleted.

  • fn __contains__(&self, item: impl FromPyObject) -> PyResult<bool>

    Implements membership test operators. Should return true if item is in self, false otherwise. For objects that don’t define __contains__(), the membership test simply traverses the sequence until it finds a match.

  • fn __concat__(&self, other: impl FromPyObject) -> PyResult<impl ToPyObject>

    Concatenates two sequences. Used by the + operator, after trying the numeric addition via the PyNumberProtocol trait method.

  • fn __repeat__(&self, count: isize) -> PyResult<impl ToPyObject>

    Repeats the sequence count times. Used by the * operator, after trying the numeric multiplication via the PyNumberProtocol trait method.

  • fn __inplace_concat__(&mut self, other: impl FromPyObject) -> PyResult<Self>

    Concatenates two sequences in place. Returns the modified first operand. Used by the += operator, after trying the numeric in place addition via the PyNumberProtocol trait method.

  • fn __inplace_repeat__(&mut self, count: isize) -> PyResult<Self>

    Repeats the sequence count times in place. Returns the modified first operand. Used by the *= operator, after trying the numeric in place multiplication via the PyNumberProtocol trait method.
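
Here is a minimal sketch of a sequence type (names are illustrative) implementing a few of these methods:

use pyo3::prelude::*;
use pyo3::exceptions::PyIndexError;
use pyo3::PySequenceProtocol;

#[pyclass]
struct IntList {
    items: Vec<i64>,
}

#[pyproto]
impl PySequenceProtocol for IntList {
    fn __len__(&self) -> usize {
        self.items.len()
    }

    fn __getitem__(&self, idx: isize) -> PyResult<i64> {
        // Because __len__ is defined, negative indexes have already been
        // converted to positive ones before this method is called.
        self.items
            .get(idx as usize)
            .copied()
            .ok_or_else(|| PyIndexError::new_err("index out of range"))
    }

    fn __contains__(&self, item: i64) -> bool {
        self.items.contains(&item)
    }
}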

Emulating mapping containers (such as dictionaries)

The PyMappingProtocol trait can be implemented to emulate mapping container types.

For a mapping, the keys may be Python objects of arbitrary type.

  • fn __len__(&self) -> PyResult<usize>

    Implements the built-in function len() for the mapping.

  • fn __getitem__(&self, key: impl FromPyObject) -> PyResult<impl ToPyObject>

    Implements evaluation of the self[key] element. If key is of an inappropriate type, TypeError may be raised; if key is missing (not in the container), KeyError should be raised.

  • fn __setitem__(&mut self, key: impl FromPyObject, value: impl FromPyObject) -> PyResult<()>

    Implements assignment to the self[key] element or insertion of a new key mapping to value. Should only be implemented if the mapping supports changes to the values for keys, or if new keys can be added. The same exceptions should be raised for improper key values as for the __getitem__() method.

  • fn __delitem__(&mut self, key: impl FromPyObject) -> PyResult<()>

    Implements deletion of the self[key] element. Should only be implemented if the mapping supports removal of keys. The same exceptions should be raised for improper key values as for the __getitem__() method.
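
The following sketch (names are illustrative) shows a simple string-to-string mapping type implementing these methods:

use pyo3::prelude::*;
use pyo3::exceptions::PyKeyError;
use pyo3::PyMappingProtocol;
use std::collections::HashMap;

#[pyclass]
struct Settings {
    values: HashMap<String, String>,
}

#[pyproto]
impl PyMappingProtocol for Settings {
    fn __len__(&self) -> usize {
        self.values.len()
    }

    fn __getitem__(&self, key: String) -> PyResult<String> {
        self.values
            .get(&key)
            .cloned()
            .ok_or_else(|| PyKeyError::new_err(key))
    }

    fn __setitem__(&mut self, key: String, value: String) {
        self.values.insert(key, value);
    }

    fn __delitem__(&mut self, key: String) -> PyResult<()> {
        self.values
            .remove(&key)
            .map(|_| ())
            .ok_or_else(|| PyKeyError::new_err(key))
    }
}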

Garbage Collector Integration

If your type owns references to other Python objects, you will need to integrate with Python's garbage collector so that the GC is aware of those references. To do this, implement the PyGCProtocol trait for your struct. It includes two methods __traverse__ and __clear__. These correspond to the slots tp_traverse and tp_clear in the Python C API. __traverse__ must call visit.call() for each reference to another Python object. __clear__ must clear out any mutable references to other Python objects (thus breaking reference cycles). Immutable references do not have to be cleared, as every cycle must contain at least one mutable reference. Example:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::PyTraverseError;
use pyo3::gc::{PyGCProtocol, PyVisit};

#[pyclass]
struct ClassWithGCSupport {
    obj: Option<PyObject>,
}

#[pyproto]
impl PyGCProtocol for ClassWithGCSupport {
    fn __traverse__(&self, visit: PyVisit) -> Result<(), PyTraverseError> {
        if let Some(obj) = &self.obj {
            visit.call(obj)?
        }
        Ok(())
    }

    fn __clear__(&mut self) {
        // Clear reference, this decrements ref counter.
        self.obj = None;
    }
}
}

Special protocol trait implementations have to be annotated with the #[pyproto] attribute.

It is also possible to enable GC for custom classes using the gc parameter of the pyclass attribute, i.e. #[pyclass(gc)]. In that case instances of the custom class participate in Python garbage collection, and it is possible to track them with gc module methods. When using the gc parameter, you must implement the PyGCProtocol trait; failing to do so will result in an error at compile time:

#[pyclass(gc)]
struct GCTracked {} // Fails because it does not implement PyGCProtocol

Iterator Types

Iterators can be defined using the PyIterProtocol trait. It includes two methods __iter__ and __next__:

  • fn __iter__(slf: PyRefMut<Self>) -> PyResult<impl IntoPy<PyObject>>
  • fn __next__(slf: PyRefMut<Self>) -> PyResult<Option<impl IntoPy<PyObject>>>

Returning None from __next__ indicates that there are no further items. These two methods can take either PyRef<Self> or PyRefMut<Self> as their first argument, so that a mutable borrow can be avoided if needed.

Example:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::PyIterProtocol;

#[pyclass]
struct MyIterator {
    iter: Box<dyn Iterator<Item = PyObject> + Send>,
}

#[pyproto]
impl PyIterProtocol for MyIterator {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> {
        slf
    }
    fn __next__(mut slf: PyRefMut<Self>) -> Option<PyObject> {
        slf.iter.next()
    }
}
}

In many cases you'll have a distinction between the type being iterated over (i.e. the iterable) and the iterator it provides. In this case, you should implement PyIterProtocol for both the iterable and the iterator, but the iterable only needs to support __iter__() while the iterator must support both __iter__() and __next__(). The default implementations in PyIterProtocol will ensure that the objects behave correctly in Python. For example:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::PyIterProtocol;

#[pyclass]
struct Iter {
    inner: std::vec::IntoIter<usize>,
}

#[pyproto]
impl PyIterProtocol for Iter {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> {
        slf
    }

    fn __next__(mut slf: PyRefMut<Self>) -> Option<usize> {
        slf.inner.next()
    }
}

#[pyclass]
struct Container {
    iter: Vec<usize>,
}

#[pyproto]
impl PyIterProtocol for Container {
    fn __iter__(slf: PyRef<Self>) -> PyResult<Py<Iter>> {
        let iter = Iter {
            inner: slf.iter.clone().into_iter(),
        };
        Py::new(slf.py(), iter)
    }
}

Python::with_gil(|py| {
    let container = Container { iter: vec![1, 2, 3, 4] };
    let inst = pyo3::PyCell::new(py, container).unwrap();
    pyo3::py_run!(py, inst, "assert list(inst) == [1, 2, 3, 4]");
    pyo3::py_run!(py, inst, "assert list(iter(iter(inst))) == [1, 2, 3, 4]");
});
}

For more details on Python's iteration protocols, check out the "Iterator Types" section of the library documentation.

Returning a value from iteration

This guide has so far shown how to use Option<T> to implement yielding values during iteration. In Python a generator can also return a value. To express this in Rust, PyO3 provides the IterNextOutput enum to both Yield values and Return a final value - see its docs for further details and an example.
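
A rough sketch of what that can look like, assuming a counting iterator (the Countdown type is illustrative; see the IterNextOutput docs for the authoritative example):


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::class::iter::IterNextOutput;
use pyo3::PyIterProtocol;

#[pyclass]
struct Countdown {
    count: usize,
}

#[pyproto]
impl PyIterProtocol for Countdown {
    fn __iter__(slf: PyRef<Self>) -> PyRef<Self> {
        slf
    }

    fn __next__(mut slf: PyRefMut<Self>) -> IterNextOutput<usize, &'static str> {
        if slf.count > 0 {
            slf.count -= 1;
            // Yield produces the next item, like `yield` in a Python generator.
            IterNextOutput::Yield(slf.count)
        } else {
            // Return ends iteration with a final value, like `return` in a generator
            // (it becomes the `value` attribute of the raised StopIteration).
            IterNextOutput::Return("Ended")
        }
    }
}
}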

Type Conversions

In this portion of the guide we'll talk about the mapping of Python types to Rust types offered by PyO3, as well as the traits available to perform conversions between them.

Mapping of Rust types to Python types

When writing functions callable from Python (such as a #[pyfunction] or in a #[pymethods] block), the trait FromPyObject is required for function arguments, and IntoPy<PyObject> is required for function return values.

Consult the tables in the following section to find the Rust types provided by PyO3 which implement these traits.

Argument Types

When accepting a function argument, it is possible to either use Rust library types or PyO3's Python-native types. (See the next section for discussion on when to use each.)

The table below contains the Python type and the corresponding function argument types that will accept them:

Python               | Rust                                                                                       | Rust (Python-native)
object               | -                                                                                          | &PyAny
str                  | String, Cow<str>, &str, OsString, PathBuf                                                  | &PyUnicode
bytes                | Vec<u8>, &[u8]                                                                             | &PyBytes
bool                 | bool                                                                                       | &PyBool
int                  | Any integer type (i32, u32, usize, etc)                                                    | &PyLong
float                | f32, f64                                                                                   | &PyFloat
complex              | num_complex::Complex [1]                                                                   | &PyComplex
list[T]              | Vec<T>                                                                                     | &PyList
dict[K, V]           | HashMap<K, V>, BTreeMap<K, V>, hashbrown::HashMap<K, V> [2], indexmap::IndexMap<K, V> [3]  | &PyDict
tuple[T, U]          | (T, U), Vec<T>                                                                             | &PyTuple
set[T]               | HashSet<T>, BTreeSet<T>, hashbrown::HashSet<T> [2]                                         | &PySet
frozenset[T]         | HashSet<T>, BTreeSet<T>, hashbrown::HashSet<T> [2]                                         | &PyFrozenSet
bytearray            | Vec<u8>                                                                                    | &PyByteArray
slice                | -                                                                                          | &PySlice
type                 | -                                                                                          | &PyType
module               | -                                                                                          | &PyModule
datetime.datetime    | -                                                                                          | &PyDateTime
datetime.date        | -                                                                                          | &PyDate
datetime.time        | -                                                                                          | &PyTime
datetime.tzinfo      | -                                                                                          | &PyTzInfo
datetime.timedelta   | -                                                                                          | &PyDelta
typing.Optional[T]   | Option<T>                                                                                  | -
typing.Sequence[T]   | Vec<T>                                                                                     | &PySequence
typing.Iterator[Any] | -                                                                                          | &PyIterator
typing.Union[...]    | See #[derive(FromPyObject)]                                                                | -

There are also a few special types related to the GIL and Rust-defined #[pyclass]es which may come in useful:

What        | Description
Python      | A GIL token, used to pass to PyO3 constructors to prove ownership of the GIL
Py<T>       | A Python object isolated from the GIL lifetime. This can be sent to other threads.
PyObject    | An alias for Py<PyAny>
&PyCell<T>  | A #[pyclass] value owned by Python.
PyRef<T>    | A #[pyclass] borrowed immutably.
PyRefMut<T> | A #[pyclass] borrowed mutably.

For more detail on accepting #[pyclass] values as function arguments, see the section of this guide on Python Classes.

Using Rust library types vs Python-native types

Using Rust library types as function arguments will incur a conversion cost compared to using the Python-native types. Using the Python-native types is almost zero-cost (they just require a type check similar to the Python builtin function isinstance()).

However, once that conversion cost has been paid, the Rust standard library types offer a number of benefits:

  • You can write functionality in native-speed Rust code (free of Python's runtime costs).
  • You get better interoperability with the rest of the Rust ecosystem.
  • You can use Python::allow_threads to release the Python GIL and let other Python threads make progress while your Rust code is executing.
  • You also benefit from stricter type checking. For example you can specify Vec<i32>, which will only accept a Python list containing integers. The Python-native equivalent, &PyList, would accept a Python list containing Python objects of any type.

For most PyO3 usage the conversion cost is worth paying to get these benefits. As always, if you're not sure it's worth it in your case, benchmark it!
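
As an illustration of the trade-off, here is a sketch of the two styles side by side (the function names are hypothetical):

use pyo3::prelude::*;
use pyo3::types::PyList;

// Converts the whole list to Vec<i32> up front; rejects non-integer elements,
// then runs at native speed with no further Python calls.
#[pyfunction]
fn sum_as_rust(values: Vec<i32>) -> i32 {
    values.iter().sum()
}

// Accepts any Python list after only a type check; each element is converted
// lazily, one Python API call at a time.
#[pyfunction]
fn sum_as_native(values: &PyList) -> PyResult<i32> {
    let mut total = 0;
    for item in values.iter() {
        total += item.extract::<i32>()?;
    }
    Ok(total)
}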

Returning Rust values to Python

When returning values from functions callable from Python, Python-native types (&PyAny, &PyDict etc.) can be used with zero cost.

Because these types are references, in some situations the Rust compiler may ask for lifetime annotations. If this is the case, you should use Py<PyAny>, Py<PyDict> etc. instead - which are also zero-cost. For all of these Python-native types T, Py<T> can be created from T with an .into() conversion.

If your function is fallible, it should return PyResult<T> or Result<T, E> where PyErr implements From<E>. This will raise a Python exception if the Err variant is returned.
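
For example, a fallible #[pyfunction] returning a GIL-independent Py<PyDict> might look like this (a minimal sketch; the function name is illustrative):

use pyo3::prelude::*;
use pyo3::types::PyDict;

#[pyfunction]
fn make_dict(py: Python) -> PyResult<Py<PyDict>> {
    let dict = PyDict::new(py);
    dict.set_item("key", 1)?;
    // &PyDict converts to Py<PyDict> with .into(), at zero cost.
    Ok(dict.into())
}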

Finally, the following Rust types are also able to convert to Python as return values:

Rust type                               | Resulting Python Type
String                                  | str
&str                                    | str
bool                                    | bool
Any integer type (i32, u32, usize, etc) | int
f32, f64                                | float
Option<T>                               | Optional[T]
(T, U)                                  | Tuple[T, U]
Vec<T>                                  | List[T]
HashMap<K, V>                           | Dict[K, V]
BTreeMap<K, V>                          | Dict[K, V]
HashSet<T>                              | Set[T]
BTreeSet<T>                             | Set[T]
&PyCell<T: PyClass>                     | T
PyRef<T: PyClass>                       | T
PyRefMut<T: PyClass>                    | T
[1] Requires the num-complex optional feature.

[2] Requires the hashbrown optional feature.

[3] Requires the indexmap optional feature.

Conversion traits

PyO3 provides some handy traits to convert between Python types and Rust types.

.extract() and the FromPyObject trait

The easiest way to convert a Python object to a Rust value is using .extract(). It returns a PyResult with a type error if the conversion fails, so usually you will use something like

let v: Vec<i32> = obj.extract()?;

This method is available for many Python object types, and can produce a wide variety of Rust types, which you can check out in the implementor list of FromPyObject.

FromPyObject is also implemented for your own Rust types wrapped as Python objects (see the chapter about classes). There, in order to both be able to operate on mutable references and satisfy Rust's rules of non-aliasing mutable references, you have to extract the PyO3 reference wrappers PyRef and PyRefMut. They work like the reference wrappers of std::cell::RefCell and ensure (at runtime) that Rust borrows are allowed.

Deriving FromPyObject

FromPyObject can be automatically derived for many kinds of structs and enums if the member types themselves implement FromPyObject. This even includes members with a generic type T: FromPyObject. Derivation is not supported for empty enums, or for enum variants and structs that have no fields.

Deriving FromPyObject for structs

use pyo3::prelude::*;

#[derive(FromPyObject)]
struct RustyStruct {
    my_string: String,
}

The derivation generates code that will, by default, access the attribute my_string on the Python object, i.e. obj.getattr("my_string"), and call extract() on the attribute. It is also possible to access the value on the Python object through obj.get_item("my_string") by setting the pyo3(item) attribute on the field:

use pyo3::prelude::*;

#[derive(FromPyObject)]
struct RustyStruct {
    #[pyo3(item)]
    my_string: String,
}

The argument passed to getattr and get_item can also be configured:

use pyo3::prelude::*;

#[derive(FromPyObject)]
struct RustyStruct {
    #[pyo3(item("key"))]
    string_in_mapping: String,
    #[pyo3(attribute("name"))]
    string_attr: String,
}

This tries to extract string_attr from the attribute name and string_in_mapping from a mapping with the key "key". The arguments for attribute are restricted to non-empty string literals while item can take any valid literal that implements ToBorrowedObject.
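
To illustrate the item("key") form, here is a sketch of extracting such a struct from a Python dict (the ConfigEntry name is illustrative):


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyDict;

#[derive(FromPyObject)]
struct ConfigEntry {
    #[pyo3(item("key"))]
    string_in_mapping: String,
}

Python::with_gil(|py| -> PyResult<()> {
    let dict = PyDict::new(py);
    dict.set_item("key", "value")?;

    // The derived implementation looks the field up with get_item("key").
    let entry: ConfigEntry = dict.extract()?;
    assert_eq!(entry.string_in_mapping, "value");
    Ok(())
}).unwrap();
}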

Deriving FromPyObject for tuple structs

Tuple structs are also supported but do not allow customizing the extraction. The input is always assumed to be a Python tuple with the same length as the Rust type, the nth field is extracted from the nth item in the Python tuple.

use pyo3::prelude::*;

#[derive(FromPyObject)]
struct RustyTuple(String, String);

Tuple structs with a single field are treated as wrapper types which are described in the following section. To override this behaviour and ensure that the input is in fact a tuple, specify the struct as

use pyo3::prelude::*;

#[derive(FromPyObject)]
struct RustyTuple((String,));

Deriving FromPyObject for wrapper types

The pyo3(transparent) attribute can be used on structs with exactly one field. This results in extracting directly from the input object, i.e. obj.extract(), rather than trying to access an item or attribute. This behaviour is enabled by default for newtype structs and tuple-variants with a single field.

use pyo3::prelude::*;

#[derive(FromPyObject)]
struct RustyTransparentTupleStruct(String);

#[derive(FromPyObject)]
#[pyo3(transparent)]
struct RustyTransparentStruct {
    inner: String,
}

Deriving FromPyObject for enums

The FromPyObject derivation for enums generates code that tries to extract the variants in the order of the fields. As soon as a variant can be extracted successfully, that variant is returned. This makes it possible to extract Python types like Union[str, int].

The same customizations and restrictions described for struct derivations apply to enum variants, i.e. a tuple variant assumes that the input is a Python tuple, and a struct variant defaults to extracting fields as attributes but can be configured in the same manner. The transparent attribute can be applied to single-field-variants.

use pyo3::prelude::*;

#[derive(FromPyObject)]
enum RustyEnum<'a> {
    Int(usize), // input is a positive int
    String(String), // input is a string
    IntTuple(usize, usize), // input is a 2-tuple with positive ints
    StringIntTuple(String, usize), // input is a 2-tuple with String and int
    Coordinates3d { // needs to be in front of 2d
        x: usize,
        y: usize,
        z: usize,
    },
    Coordinates2d { // only gets checked if the input did not have `z`
        #[pyo3(attribute("x"))]
        a: usize,
        #[pyo3(attribute("y"))]
        b: usize,
    },
    #[pyo3(transparent)]
    CatchAll(&'a PyAny), // This extraction never fails
}

If none of the enum variants match, a PyValueError containing the names of the tested variants is returned. The names reported in the error message can be customized through the pyo3(annotation = "name") attribute, e.g. to use conventional Python type names:

use pyo3::prelude::*;

#[derive(FromPyObject)]
enum RustyEnum {
    #[pyo3(transparent, annotation = "str")]
    String(String),
    #[pyo3(transparent, annotation = "int")]
    Int(isize),
}

If the input is neither a string nor an integer, the error message will be: "'<INPUT_TYPE>' cannot be converted to 'Union[str, int]'".

#[derive(FromPyObject)] Container Attributes

  • pyo3(transparent)
    • extract the field directly from the object as obj.extract() instead of get_item() or getattr()
    • Newtype structs and tuple-variants are treated as transparent by default.
    • only supported for single-field structs and enum variants
  • pyo3(annotation = "name")
    • changes the name of the failed variant in the generated error message in case of failure.
    • e.g. pyo3(annotation = "int") reports the variant's type as int.
    • only supported for enum variants

#[derive(FromPyObject)] Field Attributes

  • pyo3(attribute), pyo3(attribute("name"))
    • retrieve the field from an attribute, possibly with a custom name specified as an argument
    • argument must be a string-literal.
  • pyo3(item), pyo3(item("key"))
    • retrieve the field from a mapping, possibly with the custom key specified as an argument.
    • can be any literal that implements ToBorrowedObject

IntoPy<T>

This trait defines the to-python conversion for a Rust type. It is usually implemented as IntoPy<PyObject>, which is the trait needed for returning a value from #[pyfunction] and #[pymethods].

All types in PyO3 implement this trait, as does a #[pyclass] which doesn't use extends.

Occasionally you may choose to implement this for custom types which are mapped to Python types without having a unique Python type.

use pyo3::prelude::*;

struct MyPyObjectWrapper(PyObject);

impl IntoPy<PyObject> for MyPyObjectWrapper {
    fn into_py(self, py: Python) -> PyObject {
        self.0
    }
}

The ToPyObject trait

ToPyObject is a conversion trait that allows various objects to be converted into PyObject. IntoPy<PyObject> serves the same purpose, except that it consumes self.
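
For comparison with the IntoPy example above, a borrowing conversion for the same wrapper might be sketched like this (an illustration, not a required implementation):

use pyo3::prelude::*;

struct MyPyObjectWrapper(PyObject);

impl ToPyObject for MyPyObjectWrapper {
    fn to_object(&self, py: Python) -> PyObject {
        // Borrows self, so the inner object is cloned; this only increments
        // the Python reference count.
        self.0.clone_ref(py)
    }
}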

Python Exceptions

Defining a new exception

You can use the create_exception! macro to define a new exception type:


#![allow(unused)]
fn main() {
use pyo3::create_exception;

create_exception!(module, MyError, pyo3::exceptions::PyException);
}
  • module is the name of the containing module.
  • MyError is the name of the new exception type.

For example:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::create_exception;
use pyo3::types::IntoPyDict;
use pyo3::exceptions::PyException;

create_exception!(mymodule, CustomError, PyException);

Python::with_gil(|py| {
    let ctx = [("CustomError", py.get_type::<CustomError>())].into_py_dict(py);
    pyo3::py_run!(py, *ctx, "assert str(CustomError) == \"<class 'mymodule.CustomError'>\"");
    pyo3::py_run!(py, *ctx, "assert CustomError('oops').args == ('oops',)");
});
}

When using PyO3 to create an extension module, you can add the new exception to the module like this, so that it is importable from Python:


create_exception!(mymodule, CustomError, PyException);

#[pymodule]
fn mymodule(py: Python, m: &PyModule) -> PyResult<()> {
    // ... other elements added to module ...
    m.add("CustomError", py.get_type::<CustomError>())?;

    Ok(())
}

Raising an exception

To raise an exception, first you need to obtain an exception type and construct a new PyErr, then call the PyErr::restore method to write the exception back to the Python interpreter's global state.


#![allow(unused)]
fn main() {
use pyo3::{Python, PyErr};
use pyo3::exceptions::PyTypeError;

Python::with_gil(|py| {
    PyTypeError::new_err("Error").restore(py);
    assert!(PyErr::occurred(py));
    drop(PyErr::fetch(py));
});
}

From pyfunctions and pyclass methods, returning an Err(PyErr) is enough; PyO3 will handle restoring the exception on the Python interpreter side.

If you already have a Python exception instance, you can simply call PyErr::from_instance.

PyErr::from_instance(py, err).restore(py);

If a Rust type exists for the exception, then it is possible to use the new_err method. For example, each standard exception defined in the pyo3::exceptions module has a corresponding Rust type, and exceptions defined by the create_exception! and import_exception! macros have Rust types as well.


#![allow(unused)]
fn main() {
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;
fn check_for_error() -> bool {false}
fn my_func(arg: PyObject) -> PyResult<()> {
    if check_for_error() {
        Err(PyValueError::new_err("argument is wrong"))
    } else {
        Ok(())
    }
}
}

Checking exception types

Python has an isinstance built-in function to check an object's type. In PyO3 every native type has access to the PyAny::is_instance method, which does the same thing.


#![allow(unused)]
fn main() {
use pyo3::Python;
use pyo3::types::{PyBool, PyList};

Python::with_gil(|py| {
    assert!(PyBool::new(py, true).is_instance::<PyBool>().unwrap());
    let list = PyList::new(py, &[1, 2, 3, 4]);
    assert!(!list.is_instance::<PyBool>().unwrap());
    assert!(list.is_instance::<PyList>().unwrap());
});
}

PyAny::is_instance calls the underlying PyType::is_instance method to do the actual work.

To check the type of an exception, you can similarly do:


#![allow(unused)]
fn main() {
use pyo3::exceptions::PyTypeError;
use pyo3::prelude::*;
Python::with_gil(|py| {
let err = PyTypeError::new_err(());
err.is_instance::<PyTypeError>(py);
});
}

Handling Rust errors

The vast majority of operations in this library will return PyResult<T>, which is an alias for the type Result<T, PyErr>.

A PyErr represents a Python exception. Errors within the PyO3 library are also exposed as Python exceptions.

If your code has a custom error type e.g. MyError, adding an implementation of std::convert::From<MyError> for PyErr is usually enough. PyO3 will then automatically convert your error to a Python exception when needed.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::exceptions::PyOSError;
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct CustomIOError;

impl Error for CustomIOError {}

impl fmt::Display for CustomIOError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "Oh no!")
    }
}

fn bind(_addr: &str) -> Result<(), CustomIOError> {
    Err(CustomIOError)
}
impl std::convert::From<CustomIOError> for PyErr {
    fn from(err: CustomIOError) -> PyErr {
        PyOSError::new_err(err.to_string())
    }
}

#[pyfunction]
fn connect(s: String) -> Result<bool, CustomIOError> {
    bind("127.0.0.1:80")?;
    Ok(true)
}
}

The code snippet above will raise an OSError in Python if bind() returns a CustomIOError.

std::convert::From<T> for PyErr is implemented for most of the Rust standard library's error types, so the ? operator can be used.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

fn parse_int(s: String) -> PyResult<usize> {
    Ok(s.parse::<usize>()?)
}
}

The code snippet above will raise a ValueError in Python if String::parse() returns an error.

If lazy construction of the Python exception instance is desired, the PyErrArguments trait can be implemented. In that case, actual exception argument creation is delayed until the PyErr is needed.
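
A minimal sketch of that pattern (the InvalidInput type and its message are made up for illustration; it assumes PyErrArguments with its single arguments method as in PyO3 0.14):

use pyo3::prelude::*;
use pyo3::exceptions::PyValueError;
use pyo3::PyErrArguments;

struct InvalidInput {
    field: String,
}

impl PyErrArguments for InvalidInput {
    fn arguments(self, py: Python) -> PyObject {
        // Only runs if and when the exception object is actually created.
        format!("invalid value in field `{}`", self.field).into_py(py)
    }
}

fn validate(ok: bool) -> PyResult<()> {
    if ok {
        Ok(())
    } else {
        Err(PyValueError::new_err(InvalidInput { field: "name".to_string() }))
    }
}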

Using exceptions defined in Python code

It is possible to use an exception defined in Python code as a native Rust type. The import_exception! macro allows importing a specific exception class and defines a Rust type for that exception.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

mod io {
    pyo3::import_exception!(io, UnsupportedOperation);
}

fn tell(file: &PyAny) -> PyResult<u64> {
    use pyo3::exceptions::*;

    match file.call_method0("tell") {
        Err(_) => Err(io::UnsupportedOperation::new_err("not supported: tell")),
        Ok(x) => x.extract::<u64>(),
    }
}

}

pyo3::exceptions defines exceptions for several standard library modules.

Calling Python in Rust code

This chapter of the guide documents some ways to interact with Python code from Rust:

  • How to call Python functions
  • How to execute existing Python code

Calling Python functions

Any Python-native object reference (such as &PyAny, &PyList, or &PyCell<MyClass>) can be used to call Python functions.

PyO3 offers two APIs to make function calls:

  • call - call any callable Python object.
  • call_method - call a method on the Python object.

Both of these APIs take args and kwargs arguments (for positional and keyword arguments respectively). There are variants for less complex calls:

  • call1 and call_method1 to call only with positional args.
  • call0 and call_method0 to call with no arguments.

For convenience the Py<T> smart pointer also exposes these same six API methods, but needs a Python token as an additional first argument to prove the GIL is held.

The example below shows how to call Python functions behind a PyObject (aka Py<PyAny>) reference:

use pyo3::prelude::*;
use pyo3::types::{PyDict, PyTuple};

struct SomeObject;
impl SomeObject {
    fn new(py: Python) -> PyObject {
        PyDict::new(py).to_object(py)
    }
}

fn main() {
    let arg1 = "arg1";
    let arg2 = "arg2";
    let arg3 = "arg3";

    Python::with_gil(|py| {
        let obj = SomeObject::new(py);

        // call object with no arguments
        obj.call0(py);

        // call object with PyTuple
        let args = PyTuple::new(py, &[arg1, arg2, arg3]);
        obj.call1(py, args);

        // pass arguments as rust tuple
        let args = (arg1, arg2, arg3);
        obj.call1(py, args);
    });
}

Creating keyword arguments

For the call and call_method APIs, kwargs can be None or Some(&PyDict). You can use the IntoPyDict trait to convert other dict-like containers, e.g. HashMap or BTreeMap, as well as tuples with up to 10 elements and Vecs where each element is a two-element tuple.

use pyo3::prelude::*;
use pyo3::types::{IntoPyDict, PyDict};
use std::collections::HashMap;

struct SomeObject;

impl SomeObject {
    fn new(py: Python) -> PyObject {
        PyDict::new(py).to_object(py)
    }
}

fn main() {
    let key1 = "key1";
    let val1 = 1;
    let key2 = "key2";
    let val2 = 2;

    Python::with_gil(|py| {
        let obj = SomeObject::new(py);

        // call object with PyDict
        let kwargs = [(key1, val1)].into_py_dict(py);
        obj.call(py, (), Some(kwargs));

        // pass arguments as Vec
        let kwargs = vec![(key1, val1), (key2, val2)];
        obj.call(py, (), Some(kwargs.into_py_dict(py)));

        // pass arguments as HashMap
        let mut kwargs = HashMap::<&str, i32>::new();
        kwargs.insert(key1, 1);
        obj.call(py, (), Some(kwargs.into_py_dict(py)));
   });
}

Executing existing Python code

If you already have some existing Python code that you need to execute from Rust, the following FAQs can help you select the right PyO3 functionality for your situation:

Want to access Python APIs? Then use PyModule::import.

PyModule::import can be used to get a handle to a Python module from Rust. You can use this to import and use any Python module available in your environment.

use pyo3::prelude::*;

fn main() -> PyResult<()> {
    Python::with_gil(|py| {
        let builtins = PyModule::import(py, "builtins")?;
        let total: i32 = builtins.getattr("sum")?.call1((vec![1, 2, 3],))?.extract()?;
        assert_eq!(total, 6);
        Ok(())
    })
}

Want to run just an expression? Then use eval.

Python::eval is a method to execute a Python expression and return the evaluated value as a &PyAny object.

use pyo3::prelude::*;
use pyo3::types::IntoPyDict;

fn main() -> Result<(), ()> {
Python::with_gil(|py| {
    let result = py.eval("[i * 10 for i in range(5)]", None, None).map_err(|e| {
        e.print_and_set_sys_last_vars(py);
    })?;
    let res: Vec<i64> = result.extract().unwrap();
    assert_eq!(res, vec![0, 10, 20, 30, 40]);
    Ok(())
})
}

Want to run statements? Then use run.

Python::run is a method to execute one or more Python statements. This method returns nothing (like any Python statement), but you can get access to manipulated objects via the locals dict.

You can also use the py_run! macro, which is a shorthand for Python::run. Since py_run! panics on exceptions, we recommend you use this macro only for quickly testing your Python extensions.

use pyo3::prelude::*;
use pyo3::{PyCell, PyObjectProtocol, py_run};

fn main() {
#[pyclass]
struct UserData {
    id: u32,
    name: String,
}

#[pymethods]
impl UserData {
    fn as_tuple(&self) -> (u32, String) {
        (self.id, self.name.clone())
    }
}

#[pyproto]
impl PyObjectProtocol for UserData {
    fn __repr__(&self) -> PyResult<String> {
        Ok(format!("User {}(id: {})", self.name, self.id))
    }
}

Python::with_gil(|py| {
    let userdata = UserData {
        id: 34,
        name: "Yu".to_string(),
    };
    let userdata = PyCell::new(py, userdata).unwrap();
    let userdata_as_tuple = (34, "Yu");
    py_run!(py, userdata userdata_as_tuple, r#"
assert repr(userdata) == "User Yu(id: 34)"
assert userdata.as_tuple() == userdata_as_tuple
    "#);
})
}

You have a Python file or code snippet? Then use PyModule::from_code.

PyModule::from_code can be used to generate a Python module which can then be used just as if it was imported with PyModule::import.

use pyo3::{prelude::*, types::{IntoPyDict, PyModule}};

fn main() -> PyResult<()> {
Python::with_gil(|py| {
    let activators = PyModule::from_code(py, r#"
def relu(x):
    """see https://en.wikipedia.org/wiki/Rectifier_(neural_networks)"""
    return max(0.0, x)

def leaky_relu(x, slope=0.01):
    return x if x >= 0 else x * slope
    "#, "activators.py", "activators")?;

    let relu_result: f64 = activators.getattr("relu")?.call1((-1.0,))?.extract()?;
    assert_eq!(relu_result, 0.0);

    let kwargs = [("slope", 0.2)].into_py_dict(py);
    let lrelu_result: f64 = activators
        .getattr("leaky_relu")?.call((-1.0,), Some(kwargs))?
        .extract()?;
    assert_eq!(lrelu_result, -0.2);
   Ok(())
})
}

Need to use a context manager from Rust?

Use context managers by directly invoking __enter__ and __exit__.

use pyo3::prelude::*;
use pyo3::types::PyModule;

fn main() {
    Python::with_gil(|py| {
        let custom_manager = PyModule::from_code(py, r#"
class House(object):
    def __init__(self, address):
        self.address = address
    def __enter__(self):
        print(f"Welcome to {self.address}!")
    def __exit__(self, type, value, traceback):
        if type:
            print(f"Sorry you had {type} trouble at {self.address}")
        else:
            print(f"Thank you for visiting {self.address}, come again soon!")

        "#, "house.py", "house").unwrap();

        let house_class = custom_manager.getattr("House").unwrap();
        let house = house_class.call1(("123 Main Street",)).unwrap();

        house.call_method0("__enter__").unwrap();

        let result = py.eval("undefined_variable + 1", None, None);

        // If the eval threw an exception we'll pass it through to the context manager.
        // Otherwise, __exit__ is called with None for each of its three arguments.
        match result {
            Ok(_) => {
                let none = py.None();
                house.call_method1("__exit__", (&none, &none, &none)).unwrap();
            },
            Err(e) => {
                house.call_method1(
                    "__exit__",
                    (e.ptype(py), e.pvalue(py), e.ptraceback(py))
                ).unwrap();
            }
        }
    })
}

GIL lifetimes, mutability and Python object types

On first glance, PyO3 provides a huge number of different types that can be used to wrap or refer to Python objects. This page delves into the details and gives an overview of their intended meaning, with examples when each type is best used.

Mutability and Rust types

Since Python has no concept of ownership, and works solely with boxed objects, any Python object can be referenced any number of times, and mutation is allowed from any reference.

The situation is helped a little by the Global Interpreter Lock (GIL), which ensures that only one thread can use the Python interpreter and its API at the same time, while non-Python operations (system calls and extension code) can unlock the GIL. (See the section on parallelism for how to do that in PyO3.)

In PyO3, holding the GIL is modeled by acquiring a token of the type Python<'py>, which serves three purposes:

  • It provides some global API for the Python interpreter, such as eval.
  • It can be passed to functions that require a proof of holding the GIL, such as Py::clone_ref.
  • Its lifetime can be used to create Rust references that implicitly guarantee holding the GIL, such as &'py PyAny.

The latter two points are the reason why some APIs in PyO3 require the py: Python argument, while others don't.

The PyO3 API for Python objects is written such that instead of requiring a mutable Rust reference for mutating operations such as PyList::append, a shared reference (which, in turn, can only be created through Python<'_> with a GIL lifetime) is sufficient.

However, Rust structs wrapped as Python objects (called pyclass types) usually do need &mut access. Due to the GIL, PyO3 can guarantee thread-safe access to them, but it cannot statically guarantee uniqueness of &mut references once an object's ownership has been passed to the Python interpreter. Instead, borrows are checked at runtime using PyCell, a scheme very similar to std::cell::RefCell.

Object types

PyAny

Represents: a Python object of unspecified type, restricted to a GIL lifetime. Currently, PyAny can only ever occur as a reference, &PyAny.

Used: Whenever you want to refer to some Python object and will have the GIL for the whole duration you need to access that object. For example, intermediate values and arguments to pyfunctions or pymethods implemented in Rust where any type is allowed.

Many general methods for interacting with Python objects are on the PyAny struct, such as getattr, setattr, and .call.

Conversions:

For a &PyAny object reference obj where the underlying object is a Python-native type such as a list:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::{Py, Python, PyAny, PyResult, types::PyList};
Python::with_gil(|py| -> PyResult<()> {
let obj: &PyAny = PyList::empty(py);

// To &PyList with PyAny::downcast
let _: &PyList = obj.downcast()?;

// To Py<PyAny> (aka PyObject) with .into()
let _: Py<PyAny> = obj.into();

// To Py<PyList> with PyAny::extract
let _: Py<PyList> = obj.extract()?;
Ok(())
}).unwrap();
}

For a &PyAny object reference obj where the underlying object is a #[pyclass]:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::{Py, Python, PyAny, PyResult, types::PyList};
#[pyclass] #[derive(Clone)] struct MyClass { }
Python::with_gil(|py| -> PyResult<()> {
let obj: &PyAny = Py::new(py, MyClass { })?.into_ref(py);

// To &PyCell<MyClass> with PyAny::downcast
let _: &PyCell<MyClass> = obj.downcast()?;

// To Py<PyAny> (aka PyObject) with .into()
let _: Py<PyAny> = obj.into();

// To Py<MyClass> with PyAny::extract
let _: Py<MyClass> = obj.extract()?;

// To MyClass with PyAny::extract, if MyClass: Clone
let _: MyClass = obj.extract()?;

// To PyRef<MyClass> or PyRefMut<MyClass> with PyAny::extract
let _: PyRef<MyClass> = obj.extract()?;
let _: PyRefMut<MyClass> = obj.extract()?;
Ok(())
}).unwrap();
}

PyTuple, PyDict, and many more

Represents: a native Python object of known type, restricted to a GIL lifetime just like PyAny.

Used: Whenever you want to operate with native Python types while holding the GIL. Like PyAny, this is the most convenient form to use for function arguments and intermediate values.

These types all implement Deref<Target = PyAny>, so they all expose the same methods which can be found on PyAny.

To see all Python types exposed by PyO3 you should consult the pyo3::types module.

Conversions:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyList;
Python::with_gil(|py| -> PyResult<()> {
let list = PyList::empty(py);

// Use methods from PyAny on all Python types with Deref implementation
let _ = list.repr()?;

// To &PyAny automatically with Deref implementation
let _: &PyAny = list;

// To &PyAny explicitly with .as_ref()
let _: &PyAny = list.as_ref();

// To Py<T> with .into() or Py::from()
let _: Py<PyList> = list.into();

// To PyObject with .into() or .to_object(py)
let _: PyObject = list.into();
Ok(())
}).unwrap();
}

Py<T> and PyObject

Represents: a GIL-independent reference to a Python object. This can be a Python native type (like PyTuple), or a pyclass type implemented in Rust. The most commonly-used variant, Py<PyAny>, is also known as PyObject.

Used: Whenever you want to carry around references to a Python object without caring about a GIL lifetime. For example, storing Python object references in a Rust struct that outlives the Python-Rust FFI boundary, or returning objects from functions implemented in Rust back to Python.

Can be cloned using Python reference counts with .clone().

Conversions:

For a Py<PyList>, the conversions are as below:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyList;
Python::with_gil(|py| {
let list: Py<PyList> = PyList::empty(py).into();

// To &PyList with Py::as_ref() (borrows from the Py)
let _: &PyList = list.as_ref(py);

let list_clone = list.clone(); // Because `.into_ref()` will consume `list`.
// To &PyList with Py::into_ref() (moves the pointer into PyO3's object storage)
let _: &PyList = list.into_ref(py);

let list = list_clone;
// To Py<PyAny> (aka PyObject) with .into()
let _: Py<PyAny> = list.into();
})
}

For a #[pyclass] struct MyClass, the conversions for Py<MyClass> are below:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
Python::with_gil(|py| {
#[pyclass] struct MyClass { }
Python::with_gil(|py| -> PyResult<()> {
let my_class: Py<MyClass> = Py::new(py, MyClass { })?;

// To &PyCell<MyClass> with Py::as_ref() (borrows from the Py)
let _: &PyCell<MyClass> = my_class.as_ref(py);

let my_class_clone = my_class.clone(); // Because `.into_ref()` will consume `my_class`.
// To &PyCell<MyClass> with Py::into_ref() (moves the pointer into PyO3's object storage)
let _: &PyCell<MyClass> = my_class.into_ref(py);

let my_class = my_class_clone.clone();
// To Py<PyAny> (aka PyObject) with .into_py(py)
let _: Py<PyAny> = my_class.into_py(py);

let my_class = my_class_clone;
// To PyRef<MyClass> with Py::borrow or Py::try_borrow
let _: PyRef<MyClass> = my_class.try_borrow(py)?;

// To PyRefMut<MyClass> with Py::borrow_mut or Py::try_borrow_mut
let _: PyRefMut<MyClass> = my_class.try_borrow_mut(py)?;
Ok(())
}).unwrap();
});
}

PyCell<SomeType>

Represents: a reference to a Rust object (instance of PyClass) which is wrapped in a Python object. The cell part is an analog to stdlib's RefCell to allow access to &mut references.

Used: for accessing the pure-Rust API of the instance (members and functions taking &SomeType or &mut SomeType) while maintaining the aliasing rules of Rust references.

Like PyO3's Python-native types, PyCell<T> implements Deref<Target = PyAny>, so it also exposes all of the methods on PyAny.

Conversions:

PyCell<T> can be used to access &T and &mut T via PyRef<T> and PyRefMut<T> respectively.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyList;
#[pyclass] struct MyClass { }
Python::with_gil(|py| -> PyResult<()> {
let cell: &PyCell<MyClass> = PyCell::new(py, MyClass { })?;

// To PyRef<T> with .borrow() or .try_borrow()
let py_ref: PyRef<MyClass> = cell.try_borrow()?;
let _: &MyClass = &*py_ref;
drop(py_ref);

// To PyRefMut<T> with .borrow_mut() or .try_borrow_mut()
let mut py_ref_mut: PyRefMut<MyClass> = cell.try_borrow_mut()?;
let _: &mut MyClass = &mut *py_ref_mut;
Ok(())
}).unwrap();
}

PyCell<T> can also be accessed like a Python-native type.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyList;
#[pyclass] struct MyClass { }
Python::with_gil(|py| -> PyResult<()> {
let cell: &PyCell<MyClass> = PyCell::new(py, MyClass { })?;

// Use methods from PyAny on PyCell<T> with Deref implementation
let _ = cell.repr()?;

// To &PyAny automatically with Deref implementation
let _: &PyAny = cell;

// To &PyAny explicitly with .as_ref()
let _: &PyAny = cell.as_ref();
Ok(())
}).unwrap();
}

PyRef<SomeType> and PyRefMut<SomeType>

Represents: reference wrapper types employed by PyCell to keep track of borrows, analog to Ref and RefMut used by RefCell.

Used: while borrowing a PyCell. They can also be used with .extract() on types like Py<T> and PyAny to get a reference quickly.

PyClass

This trait marks structs defined in Rust that are also usable as Python classes, usually defined using the #[pyclass] macro.

PyNativeType

This trait marks structs that mirror native Python types, such as PyList.

Parallelism

CPython has the infamous Global Interpreter Lock, which prevents several threads from executing Python bytecode in parallel. This makes threading in Python a bad fit for CPU-bound tasks and often forces developers to accept the overhead of multiprocessing.

In PyO3 parallelism can be easily achieved in Rust-only code. Let's take a look at our word-count example, where we have a search function that utilizes the rayon crate to count words in parallel.

#[pyfunction]
fn search(contents: &str, needle: &str) -> usize {
    contents
        .par_lines()
        .map(|line| count_line(line, needle))
        .sum()
}

But let's assume you have a long running Rust function which you would like to execute several times in parallel. For the sake of example let's take a sequential version of the word count:

fn search_sequential(contents: &str, needle: &str) -> usize {
    contents.lines().map(|line| count_line(line, needle)).sum()
}

To enable parallel execution of this function, the Python::allow_threads method can be used to temporarily release the GIL, thus allowing other Python threads to run. We then have a function exposed to the Python runtime which calls search_sequential inside a closure passed to Python::allow_threads to enable true parallelism:

#[pyfunction]
fn search_sequential_allow_threads(py: Python, contents: &str, needle: &str) -> usize {
    py.allow_threads(|| search_sequential(contents, needle))
}

Now Python threads can use more than one CPU core, resolving the limitation which usually makes multi-threading in Python only good for IO-bound tasks:

from concurrent.futures import ThreadPoolExecutor
from word_count import search_sequential_allow_threads

executor = ThreadPoolExecutor(max_workers=2)

future_1 = executor.submit(
    search_sequential_allow_threads, contents, needle
)
future_2 = executor.submit(
    search_sequential_allow_threads, contents, needle
)
result_1 = future_1.result()
result_2 = future_2.result()

Benchmark

Let's benchmark the word-count example to verify that we really did unlock parallelism with PyO3.

We are using pytest-benchmark to benchmark four word count functions:

  1. Pure Python version
  2. Rust parallel version
  3. Rust sequential version
  4. Rust sequential version executed twice with two Python threads

The benchmark script can be found here, and we can run tox in the word-count folder to benchmark these functions.

While the results of the benchmark of course depend on your machine, the relative results should be similar to this (mid 2020):

-------------------------------------------------------------------------------------------------- benchmark: 4 tests -------------------------------------------------------------------------------------------------
Name (time in ms)                                          Min                Max               Mean            StdDev             Median               IQR            Outliers       OPS            Rounds  Iterations
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_word_count_rust_parallel                           1.7315 (1.0)       4.6495 (1.0)       1.9972 (1.0)      0.4299 (1.0)       1.8142 (1.0)      0.2049 (1.0)         40;46  500.6943 (1.0)         375           1
test_word_count_rust_sequential                         7.3348 (4.24)     10.3556 (2.23)      8.0035 (4.01)     0.7785 (1.81)      7.5597 (4.17)     0.8641 (4.22)         26;5  124.9457 (0.25)        121           1
test_word_count_rust_sequential_twice_with_threads      7.9839 (4.61)     10.3065 (2.22)      8.4511 (4.23)     0.4709 (1.10)      8.2457 (4.55)     0.3927 (1.92)        17;17  118.3274 (0.24)        114           1
test_word_count_python_sequential                      27.3985 (15.82)    45.4527 (9.78)     28.9604 (14.50)    4.1449 (9.64)     27.5781 (15.20)    0.4638 (2.26)          3;5   34.5299 (0.07)         35           1
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

You can see that running the sequential Rust version twice on two Python threads takes barely longer than running it once, which means that, compared to executing everything on a single CPU core, the throughput has roughly doubled.

Debugging

Macros

PyO3's attributes (#[pyclass], #[pymodule], etc.) are procedural macros, which means that they rewrite the source of the annotated item. You can view the generated source with the following command, which also expands a few other things:

cargo rustc --profile=check -- -Z unstable-options --pretty=expanded > expanded.rs; rustfmt expanded.rs

(You might need to install rustfmt if you don't already have it.)

You can also debug classic !-macros by adding -Z trace-macros:

cargo rustc --profile=check -- -Z unstable-options --pretty=expanded -Z trace-macros > expanded.rs; rustfmt expanded.rs

See cargo expand for a more elaborate version of those commands.

Running with Valgrind

Valgrind is a tool to detect memory management bugs such as memory leaks.

You first need to install a debug build of Python, otherwise Valgrind won't produce usable results. In Ubuntu there's e.g. a python3-dbg package.

Activate an environment with the debug interpreter and recompile. If you're on Linux, use ldd with the name of your binary and check that you're linking e.g. libpython3.6dm.so.1.0 instead of libpython3.6m.so.1.0.

Download the suppressions file for cpython.

Run Valgrind with valgrind --suppressions=valgrind-python.supp ./my-command --with-options

Getting a stacktrace

The best start to investigate a crash such as a segmentation fault is a backtrace.

  • Link against a debug build of python as described in the previous chapter
  • Run gdb <my-binary>
  • Enter r to run
  • After the crash occurred, enter bt or bt full to print the stacktrace

Features Reference

PyO3 provides a number of Cargo features to customise functionality. This chapter of the guide provides detail on each of them.

By default, only the macros feature is enabled.

Features for extension module authors

extension-module

This feature is required when building a Python extension module using PyO3.

It tells PyO3's build script to skip linking against libpython.so on Unix platforms, where this must not be done.

See the building and distribution section for further detail.

abi3

This feature is used when building Python extension modules to create wheels which are compatible with multiple Python versions.

It restricts PyO3's API to a subset of the full Python API which is guaranteed by PEP 384 to be forwards-compatible with future Python versions.

See the building and distribution section for further detail.

abi3-py36 / abi3-py37 / abi3-py38 / abi3-py39

These features are an extension of the abi3 feature to specify the exact minimum Python version which the multiple-version-wheel will support.

See the building and distribution section for further detail.

Features for embedding Python in Rust

auto-initialize

This feature changes Python::with_gil and Python::acquire_gil to automatically initialize a Python interpreter (by calling prepare_freethreaded_python) if needed.

If you do not enable this feature, you should call pyo3::prepare_freethreaded_python() before attempting to call any other Python APIs.
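
For example, a Rust binary embedding Python without the auto-initialize feature might look like this (a minimal sketch):

use pyo3::prelude::*;

fn main() {
    // Without the auto-initialize feature, initialize the interpreter manually
    // before calling any other Python APIs.
    pyo3::prepare_freethreaded_python();

    Python::with_gil(|py| {
        println!("Python {}", py.version());
    });
}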

Advanced Features

macros

This feature enables a dependency on the pyo3-macros crate, which provides the procedural macros portion of PyO3's API:

  • #[pymodule]
  • #[pyfunction]
  • #[pyclass]
  • #[pymethods]
  • #[pyproto]
  • #[derive(FromPyObject)]

It also provides the py_run! macro.

These macros require a number of dependencies which may not be needed by users who just need PyO3 for Python FFI. Disabling this feature enables faster builds for those users, as these dependencies will not be built if this feature is disabled.

This feature is enabled by default. To disable it, set default-features = false for the pyo3 entry in your Cargo.toml.

multiple-pymethods

This feature enables a dependency on inventory, which enables each #[pyclass] to have more than one #[pymethods] block.

Most users should only need a single #[pymethods] per #[pyclass]. In addition, not all platforms (e.g. Wasm) are supported by inventory. For this reason this feature is not enabled by default, meaning fewer dependencies and faster compilation for the majority of users.

See the #[pyclass] implementation details for more information.

nightly

The nightly feature needs the nightly Rust compiler. This allows PyO3 to use Rust's unstable specialization feature to apply the following optimizations:

  • FromPyObject for Vec and [T;N] can perform a memcpy when the object supports the Python buffer protocol.
  • ToBorrowedObject can skip a reference count increase when the provided object is a Python native type.

num-bigint

This feature adds a dependency on num-bigint and enables conversions into its BigInt and BigUint types.

num-complex

This feature adds a dependency on num-complex and enables conversions into its Complex type.

serde

The serde feature enables (de)serialization of Py<T> objects via serde. This allows you to use #[derive(Serialize, Deserialize)] on structs that hold references to #[pyclass] instances:


#![allow(unused)]

fn main() {
use pyo3::prelude::*;
use serde::{Deserialize, Serialize};

#[pyclass]
#[derive(Serialize, Deserialize)]
struct Permission {
    name: String
}

#[pyclass]
#[derive(Serialize, Deserialize)]
struct User {
    username: String,
    permissions: Vec<Py<Permission>>
}
}

Advanced topics

FFI

PyO3 exposes much of Python's C API through the ffi module.

The C API is naturally unsafe and requires you to manage reference counts, errors and specific invariants yourself. Please refer to the C API Reference Manual and The Rustonomicon before using any function from that API.

Memory Management

PyO3's "owned references" (&PyAny etc.) make PyO3 more ergonomic to use by ensuring that their lifetime can never be longer than the duration the Python GIL is held. This means that most of PyO3's API can assume the GIL is held. (If PyO3 could not assume this, every PyO3 API would need to take a Python GIL token to prove that the GIL is held.)

The caveat to these "owned references" is that Rust references do not normally convey ownership (they are always Copy, and cannot implement Drop). Whenever a PyO3 API returns an owned reference, PyO3 stores it internally, so that PyO3 can decrease the reference count just before PyO3 releases the GIL.

For most use cases this behaviour is invisible. Occasionally, however, users may need to free memory sooner than PyO3 usually does. PyO3 exposes this functionality with the GILPool struct. When a GILPool is dropped, all owned references created after the GILPool was created will be cleared.

The unsafe function Python::new_pool allows you to create a new GILPool. When doing this, you must be very careful to ensure that once the GILPool is dropped you do not retain access to any owned references created after the GILPool was created.
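
A rough sketch of scoping owned references with a pool inside a loop (the Python expression evaluated here is just a placeholder):


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

Python::with_gil(|py| -> PyResult<()> {
    for _ in 0..10 {
        // Owned references created in this iteration are freed when `pool`
        // is dropped at the end of the loop body, not when the GIL is released.
        let pool = unsafe { py.new_pool() };
        let py = pool.python();
        let obj = py.eval("object()", None, None)?;
        // ... use `obj`, but don't let it escape this iteration ...
    }
    Ok(())
}).unwrap();
}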

Building and Distribution

This chapter of the guide goes into detail on how to build and distribute projects using PyO3. The way to achieve this is very different depending on whether the project is a Python module implemented in Rust, or a Rust binary embedding Python. For both types of project there are also common problems such as the Python version to build for and the linker arguments to use.

The material in this chapter is intended for users who have already read the PyO3 README. It covers in turn the choices that can be made for Python modules and for Rust binaries. There is also a section at the end about cross-compiling projects using PyO3.

There is an additional sub-chapter dedicated to supporting multiple Python versions.

Configuring the Python version

PyO3 uses a build script (backed by the pyo3-build-config crate) to determine the Python version and set the correct linker arguments. By default it will attempt to use the following in order:

  • Any active Python virtualenv.
  • The python executable (if it's a Python 3 interpreter).
  • The python3 executable.

You can override the Python interpreter by setting the PYO3_PYTHON environment variable, e.g. PYO3_PYTHON=python3.6, PYO3_PYTHON=/usr/bin/python3.9, or even a PyPy interpreter PYO3_PYTHON=pypy3.

Once the Python interpreter is located, pyo3-build-config executes it to query the information in the sysconfig module which is needed to configure the rest of the compilation.

To validate the configuration which PyO3 will use, you can run a compilation with the environment variable PYO3_PRINT_CONFIG=1 set. An example output of doing this is shown below:

$ PYO3_PRINT_CONFIG=1 cargo build
   Compiling pyo3 v0.14.1 (/home/david/dev/pyo3)
error: failed to run custom build command for `pyo3 v0.14.1 (/home/david/dev/pyo3)`

Caused by:
  process didn't exit successfully: `/home/david/dev/pyo3/target/debug/build/pyo3-7a8cf4fe22e959b7/build-script-build` (exit status: 101)
  --- stdout
  cargo:rerun-if-env-changed=PYO3_CROSS
  cargo:rerun-if-env-changed=PYO3_CROSS_LIB_DIR
  cargo:rerun-if-env-changed=PYO3_CROSS_PYTHON_VERSION
  cargo:rerun-if-env-changed=PYO3_PRINT_CONFIG

  -- PYO3_PRINT_CONFIG=1 is set, printing configuration and halting compile --
  implementation=CPython
  version=3.8
  shared=true
  abi3=false
  lib_name=python3.8
  lib_dir=/usr/lib
  executable=/usr/bin/python
  pointer_width=64
  build_flags=WITH_THREAD

Note: if you save the output config to a file, it is possible to manually override the contents and feed it back into PyO3 using the PYO3_CONFIG_FILE env var. For now, this is an advanced feature that should not be needed for most users. The format of the config file and its contents are deliberately unstable and undocumented. If you have a production use-case for this config file, please file an issue and help us stabilize it!

Building Python extension modules

Python extension modules need to be compiled differently depending on the OS (and architecture) that they are being compiled for. As well as multiple OSes (and architectures), there are also many different Python versions which are actively supported. Packages uploaded to PyPI usually want to upload prebuilt "wheels" covering many OS/arch/version combinations so that users on all these different platforms don't have to compile the package themselves. Package vendors can opt-in to the "abi3" limited Python API which allows their wheels to be used on multiple Python versions, reducing the number of wheels they need to compile, but restricts the functionality they can use.

There are many ways to go about this: it is possible to use cargo to build the extension module (along with some manual work, which varies with OS). The PyO3 ecosystem has two packaging tools, maturin and setuptools-rust, which abstract over the OS difference and also support building wheels for PyPI upload.

PyO3 has some Cargo features to configure projects for building Python extension modules:

  • The extension-module feature, which must be enabled when building Python extension modules.
  • The abi3 feature and its version-specific abi3-pyXY companions, which are used to opt-in to the limited Python API in order to support multiple Python versions in a single wheel.

This section describes each of these packaging tools before describing how to build manually without them. It then proceeds with an explanation of the extension-module feature. Finally, there is a section describing PyO3's abi3 features.

Packaging tools

The PyO3 ecosystem has two main choices to abstract the process of developing Python extension modules:

  • maturin is a command-line tool to build, package and upload Python modules. It makes opinionated choices about project layout meaning it needs very little configuration. This makes it a great choice for users who are building a Python extension from scratch and don't need flexibility.
  • setuptools-rust is an add-on for setuptools which adds extra keyword arguments to the setup.py configuration file. It requires more configuration than maturin, however this gives additional flexibility for users adding Rust to an existing Python package that can't satisfy maturin's constraints.

Consult each project's documentation for full details on how to get started using them and how to upload wheels to PyPI.

There are also maturin-starter and setuptools-rust-starter examples in the PyO3 repository.

Manual builds

To build a PyO3-based Python extension manually, start by running cargo build as normal in a library project which uses PyO3's extension-module feature and has the cdylib crate type.

Once built, symlink (or copy) and rename the shared library from Cargo's target/ directory to your desired output directory:

  • on macOS, rename libyour_module.dylib to your_module.so.
  • on Windows, rename libyour_module.dll to your_module.pyd.
  • on Linux, rename libyour_module.so to your_module.so.

You can then open a Python shell in the output directory and you'll be able to run import your_module.

See, as an example, Bazel rules to build PyO3 on Linux at https://github.com/TheButlah/rules_pyo3.

macOS

On macOS, because the extension-module feature disables linking to libpython (see the next section), some additional linker arguments need to be set. maturin and setuptools-rust both pass these arguments for PyO3 automatically, but projects using manual builds will need to set these directly in order to support macOS.

The easiest way to set the correct linker arguments is to add a build.rs with the following content:

fn main() {
  pyo3_build_config::add_extension_module_link_args();
}

Remember to also add pyo3-build-config to the build-dependencies section in Cargo.toml.

An alternative to using pyo3-build-config is to add the following to a cargo configuration file (e.g. .cargo/config.toml):

[target.x86_64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
]

[target.aarch64-apple-darwin]
rustflags = [
  "-C", "link-arg=-undefined",
  "-C", "link-arg=dynamic_lookup",
]

The extension-module feature

PyO3's extension-module feature is used to disable linking to libpython on unix targets.

This is necessary because by default PyO3 links to libpython. This makes binaries, tests, and examples "just work". However, Python extensions on unix must not link to libpython for manylinux compliance.

The downside of not linking to libpython is that binaries, tests, and examples (which usually embed Python) will fail to build. If you have an extension module as well as other outputs in a single project, you need to use optional Cargo features to disable the extension-module when you're not building the extension module. See the FAQ for an example workaround.

Py_LIMITED_API/abi3

By default, Python extension modules can only be used with the same Python version they were compiled against. For example, an extension module built for Python 3.5 can't be imported in Python 3.8. PEP 384 introduced the idea of the limited Python API, which would have a stable ABI enabling extension modules built with it to be used against multiple Python versions. This is also known as abi3.

The advantage of building extension modules using the limited Python API is that package vendors only need to build and distribute a single copy (for each OS / architecture), and users can install it on all Python versions from the minimum version and up. The downside of this is that PyO3 can't use optimizations which rely on being compiled against a known exact Python version. It's up to you to decide whether this matters for your extension module. It's also possible to design your extension module such that you can distribute abi3 wheels but allow users compiling from source to benefit from additional optimizations - see the support for multiple python versions section of this guide, in particular the #[cfg(Py_LIMITED_API)] flag.

There are three steps involved in making use of abi3 when building Python packages as wheels:

  1. Enable the abi3 feature in pyo3. This ensures pyo3 only calls Python C-API functions which are part of the stable API, and on Windows also ensures that the project links against the correct shared object (no special behavior is required on other platforms):
[dependencies]
pyo3 = { version = "0.14.3", features = ["abi3"] }
  2. Ensure that the built shared objects are correctly marked as abi3. This is accomplished by telling your build system that you're using the limited API. maturin >= 0.9.0 and setuptools-rust >= 0.11.4 support abi3 wheels. See the corresponding PRs for more.

  3. Ensure that the .whl is correctly marked as abi3. For projects using setuptools, this is accomplished by passing --py-limited-api=cp3x (where x is the minimum Python version supported by the wheel, e.g. --py-limited-api=cp35 for Python 3.5) to setup.py bdist_wheel.

Minimum Python version for abi3

Because a single abi3 wheel can be used with many different Python versions, PyO3 has feature flags abi3-py36, abi3-py37, abi3-py38 etc. to set the minimum required Python version for your abi3 wheel. For example, if you set the abi3-py36 feature, your extension wheel can be used on all Python 3 versions from Python 3.6 and up. maturin and setuptools-rust will give the wheel a name like my-extension-1.0-cp36-abi3-manylinux2020_x86_64.whl.

As your extension module may be run with multiple different Python versions you may occasionally find you need to check the Python version at runtime to customize behavior. See the relevant section of this guide on supporting multiple Python versions at runtime.

PyO3 is only able to link your extension module to abi3 versions up to and including your host Python version. E.g., if you set abi3-py38 and try to compile the crate with a host Python 3.6, the build will fail.

As an advanced feature, you can build a PyO3 wheel without calling the Python interpreter by setting the environment variable PYO3_NO_PYTHON. On Unix systems this works unconditionally; on Windows you must also set the RUSTFLAGS environment variable to contain -L native=/path/to/python/libs so that the linker can find python3.lib.

Note: If you set more than one of these version feature flags, the highest version always wins. For example, with both abi3-py36 and abi3-py38 set, PyO3 would build a wheel which supports Python 3.8 and up.

Missing features

Due to limitations in the Python API, there are a few pyo3 features that do not work when compiling for abi3. These are:

  • #[pyo3(text_signature = "...")] does not work on classes until Python 3.10 or greater.
  • The dict and weakref options on classes are not supported until Python 3.9 or greater.
  • The buffer API is not supported.
  • Optimizations which rely on knowledge of the exact Python version compiled against.

Embedding Python in Rust

If you want to embed the Python interpreter inside a Rust program, there are two modes in which this can be done: dynamically and statically. We'll cover each of these modes in the following sections. Each of these affects how you must distribute your program. Instead of learning how to do this yourself, you might want to consider using a project like PyOxidizer to ship your application and all of its dependencies in a single file.

PyO3 automatically switches between the two linking modes depending on whether the Python distribution you have configured PyO3 to use (see above) contains a shared library or a static library. The static library is most often seen in Python distributions compiled from source without the --enable-shared configuration option. For example, this is the default for pyenv on macOS.

Dynamically embedding the Python interpreter

Embedding the Python interpreter dynamically is much easier than doing so statically. This is done by linking your program against a Python shared library (such as libpython3.9.so on UNIX, or python39.dll on Windows). The implementation of the Python interpreter resides inside the shared library. This means that when the OS runs your Rust program it also needs to be able to find the Python shared library.

This mode of embedding works well for Rust tests which need access to the Python interpreter. It is also great for Rust software which is installed inside a Python virtualenv, because the virtualenv sets up appropriate environment variables to locate the correct Python shared library.

For distributing your program to non-technical users, you will have to consider including the Python shared library in your distribution as well as setting up wrapper scripts to set the right environment variables (such as LD_LIBRARY_PATH on UNIX, or PATH on Windows).

Note that PyPy cannot be embedded in Rust (or any other software). Support for this is tracked on the PyPy issue tracker.

Statically embedding the Python interpreter

Embedding the Python interpreter statically means including the contents of a Python static library directly inside your Rust binary. This means that to distribute your program you only need to ship your binary file: it contains the Python interpreter inside the binary!

On Windows static linking is almost never done, so Python distributions don't usually include a static library. The information below applies only to UNIX.

The Python static library is usually called libpython.a.

Static linking has a lot of complications, listed below. For these reasons PyO3 does not yet have first-class support for this embedding mode. See issue 416 on PyO3's Github for more information and to discuss any issues you encounter.

The auto-initialize feature is deliberately disabled when embedding the interpreter statically, because new users of PyO3 often end up embedding statically by accident when running test programs. Trying out PyO3 is much easier using dynamic embedding.

The known complications are:

  • To import compiled extension modules (such as other Rust extension modules, or those written in C), your binary must have the correct linker flags set during compilation to export the original contents of libpython.a so that extensions can use them (e.g. -Wl,--export-dynamic).

  • The C compiler and flags which were used to create libpython.a must be compatible with your Rust compiler and flags, else you will experience compilation failures.

    Significantly different compiler versions may see errors like this:

    lto1: fatal error: bytecode stream in file 'rust-numpy/target/release/deps/libpyo3-6a7fb2ed970dbf26.rlib' generated with LTO version 6.0 instead of the expected 6.2
    

    Mismatching flags may lead to errors like this:

    /usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/9/../../../x86_64-linux-gnu/libpython3.9.a(zlibmodule.o): relocation R_X86_64_32 against `.data' can not be used when making a PIE object; recompile with -fPIE
    

If you encounter these or other complications when linking the interpreter statically, discuss them on issue 416 on PyO3's Github. It is hoped that eventually that discussion will contain enough information and solutions that PyO3 can offer first-class support for static embedding.

Cross Compiling

Thanks to Rust's great cross-compilation support, cross-compiling using PyO3 is relatively straightforward. To get started, you'll need a few pieces of software:

  • A toolchain for your target.
  • The appropriate options in your Cargo configuration file (e.g. .cargo/config.toml) for the platform you're targeting and the toolchain you are using.
  • A Python interpreter that's already been compiled for your target.
  • A Python interpreter that is built for your host and available on your PATH or via the PYO3_PYTHON environment variable.

After you've obtained the above, you can build a cross-compiled PyO3 module by using Cargo's --target flag. PyO3's build script will detect that you are attempting a cross-compile based on your host machine and the desired target.

When cross-compiling, PyO3's build script cannot execute the target Python interpreter to query the configuration, so there are a few additional environment variables you may need to set:

  • PYO3_CROSS: If present this variable forces PyO3 to configure as a cross-compilation.
  • PYO3_CROSS_LIB_DIR: This variable must be set to the directory containing the target's libpython DSO and the associated _sysconfigdata*.py file for Unix-like targets, or the Python DLL import libraries for the Windows target.
  • PYO3_CROSS_PYTHON_VERSION: Major and minor version (e.g. 3.9) of the target Python installation. This variable is only needed if PyO3 cannot determine the version to target from abi3-py3* features, or if there are multiple versions of Python present in PYO3_CROSS_LIB_DIR.

An example might look like the following (assuming your target's sysroot is at /home/pyo3/cross/sysroot and that your target is armv7):

export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"

cargo build --target armv7-unknown-linux-gnueabihf

If there are multiple Python versions in the cross lib directory and you cannot set a more precise location that contains both the libpython DSO and the _sysconfigdata*.py files, you can set the required version:

export PYO3_CROSS_PYTHON_VERSION=3.8
export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"

cargo build --target armv7-unknown-linux-gnueabihf

Or another example with the same sysroot but building for Windows:

export PYO3_CROSS_PYTHON_VERSION=3.9
export PYO3_CROSS_LIB_DIR="/home/pyo3/cross/sysroot/usr/lib"

cargo build --target x86_64-pc-windows-gnu

Any of the abi3-py3* features can be enabled instead of setting PYO3_CROSS_PYTHON_VERSION in the above examples.

The following resources may also be useful for cross-compiling:

Supporting multiple Python versions

PyO3 supports all actively-supported Python 3 and PyPy versions. As much as possible, this is done internally to PyO3 so that your crate's code does not need to adapt to the differences between each version. However, as Python features grow and change between versions, PyO3 cannot offer a completely identical API for every Python version. This may require you to add conditional compilation to your crate or runtime checks for the Python version.

This section of the guide first introduces the pyo3-build-config crate, which you can use as a build-dependency to add additional #[cfg] flags which allow you to support multiple Python versions at compile-time.

Second, we'll show how to check the Python version at runtime. This can be useful when building for multiple versions with the abi3 feature, where the Python API compiled against is not always the same as the one in use.

Conditional compilation for different Python versions

The pyo3-build-config crate exposes multiple #[cfg] flags which can be used to conditionally compile code for a given Python version. PyO3 itself depends on this crate, so by using it you can be sure that you are configured correctly for the Python version PyO3 is building against.

This allows us to write code like the following:

#[cfg(Py_3_7)]
fn function_only_supported_on_python_3_7_and_up() { }

#[cfg(not(Py_3_8))]
fn function_only_supported_before_python_3_8() { }

#[cfg(not(Py_LIMITED_API))]
fn function_incompatible_with_abi3_feature() { }

The following sections first show how to add these #[cfg] flags to your build process, and then cover some common patterns of these flags in a little more detail.

To see a full reference of all the #[cfg] flags provided, see the pyo3-build-config docs.

Using pyo3-build-config

You can use the #[cfg] flags in just two steps:

  1. Add pyo3-build-config to your crate's build dependencies in Cargo.toml:

    [build-dependencies]
    pyo3-build-config = "version = "0.14.3""
    
  2. Add a build.rs file to your crate with the following contents:

    fn main() {
        // If you have an existing build.rs file, just add this line to it.
        pyo3_build_config::use_pyo3_cfgs();
    }
    

After these steps you are ready to annotate your code!

Common usages of pyo3-build-config flags

The #[cfg] flags added by pyo3-build-config can be combined with all of Rust's logic in the #[cfg] attribute to create very precise conditional code generation. The following are some common patterns implemented using these flags:

#[cfg(Py_3_7)]

This #[cfg] marks code that will only be present on Python 3.7 and upwards. There are similar options Py_3_8, Py_3_9, Py_3_10 and so on for each minor version.

#[cfg(not(Py_3_7))]

This #[cfg] marks code that will only be present on Python versions before (but not including) Python 3.7.

#[cfg(not(Py_LIMITED_API))]

This #[cfg] marks code that is only available when building for the unlimited Python API (i.e. PyO3's abi3 feature is not enabled). This might be useful if you want to ship your extension module as an abi3 wheel and also allow users to compile it from source to make use of optimizations only possible with the unlimited API.

#[cfg(any(Py_3_9, not(Py_LIMITED_API)))]

This #[cfg] marks code which is available when running Python 3.9 or newer, or when using the unlimited API with an older Python version. Patterns like this are commonly seen on Python APIs which were added to the limited Python API in a specific minor version.

#[cfg(PyPy)]

This #[cfg] marks code which is running on PyPy.
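
As a brief illustration (the function names here are arbitrary), these flags can be freely combined with Rust's usual #[cfg] operators:

#[cfg(all(Py_3_8, not(PyPy)))]
fn only_on_cpython_3_8_and_up() { }

#[cfg(any(Py_3_9, not(Py_LIMITED_API)))]
fn uses_an_api_added_to_the_limited_api_in_3_9() { }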

Checking the Python version at runtime

When building with PyO3's abi3 feature, your extension module will be compiled against a specific minimum version of Python, but may be running on newer Python versions.

For example with PyO3's abi3-py38 feature, your extension will be compiled as if it were for Python 3.8. If you were using pyo3-build-config, #[cfg(Py_3_8)] would be present. Your user could freely install and run your abi3 extension on Python 3.9.

There's no way to detect your user doing that at compile time, so instead you need to fall back to runtime checks.

PyO3 provides the APIs Python::version() and Python::version_info() to query the running Python version. This allows you to do the following, for example:

if py.version_info() >= (3, 9) {
   // run this code only if Python 3.9 or up
}

The PyO3 Ecosystem

This portion of the guide is dedicated to crates which are external to the main PyO3 project and provide additional functionality you might find useful.

Because these projects evolve independently of the PyO3 repository the content of these articles may fall out of date over time; please file issues on the PyO3 Github to alert maintainers when this is the case.

Logging

It is desirable if both the Python and Rust parts of the application end up logging using the same configuration into the same place.

This section of the guide briefly discusses how to connect the two languages' logging ecosystems together. The recommended way for Python extension modules is to configure Rust's logger to send log messages to Python using the pyo3-log crate. For users who want to do the opposite and send Python log messages to Rust, see the note at the end of this guide.

Using pyo3-log to send Rust log messages to Python

The pyo3-log crate allows sending the messages from the Rust side to Python's logging system. This is mostly suitable for writing native extensions for Python programs.

Use pyo3_log::init to install the logger in its default configuration. It's also possible to tweak its configuration (mostly to tune its performance).


#![allow(unused)]
fn main() {
use log::info;
use pyo3::prelude::*;

#[pyfunction]
fn log_something() {
    // This will use the logger installed in `my_module` to send the `info`
    // message to the Python logging facilities.
    info!("Something!");
}

#[pymodule]
fn my_module(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
    // A good place to install the Rust -> Python logger.
    pyo3_log::init();

    m.add_function(wrap_pyfunction!(log_something, m)?)?;
    Ok(())
}
}

Then it is up to the Python side to actually output the messages somewhere.

import logging
import my_module

FORMAT = '%(levelname)s %(name)s %(asctime)-15s %(filename)s:%(lineno)d %(message)s'
logging.basicConfig(format=FORMAT)
logging.getLogger().setLevel(logging.INFO)
my_module.log_something()

It is important to initialize the Python loggers first, before calling any Rust functions that may log. If that ordering is not possible to satisfy, the limitation can be worked around; read the pyo3-log documentation about caching.
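
As a rough sketch of that workaround (assuming, as described in the pyo3-log caching documentation, that pyo3_log::init returns a reset handle whose reset method clears the cached loggers and levels; check those docs for the exact API):

#![allow(unused)]
fn main() {
// Keep the handle returned by `init` so that cached logger state can be
// cleared after the Python side (re)configures `logging`.
let reset_handle = pyo3_log::init();

// ... later, once Python's logging configuration has changed:
reset_handle.reset();
}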

The Python to Rust direction

To the best of our knowledge, nobody has implemented the reverse direction yet, though it should be possible. If you are interested, the PyO3 community would be happy to provide guidance.

Async / Await

If you are working with a Python library that makes use of async functions or wish to provide Python bindings for an async Rust library, pyo3-asyncio likely has the tools you need. It provides conversions between async functions in both Python and Rust and was designed with first-class support for popular Rust runtimes such as tokio and async-std. In addition, all async Python code runs on the default asyncio event loop, so pyo3-asyncio should work just fine with existing Python libraries.

In the following sections, we'll give a general overview of pyo3-asyncio explaining how to call async Python functions with PyO3, how to call async Rust functions from Python, and how to configure your codebase to manage the runtimes of both.

Quickstart

Here are some examples to get you started right away! A more detailed breakdown of the concepts in these examples can be found in the following sections.

Rust Applications

Here we initialize the runtime, import Python's asyncio library and run the given future to completion using Python's default EventLoop and async-std. Inside the future, we convert asyncio sleep into a Rust future and await it.

# Cargo.toml dependencies
[dependencies]
pyo3 = { version = "0.14" }
pyo3-asyncio = { version = "0.14", features = ["attributes", "async-std-runtime"] }
async-std = "1.9"
//! main.rs

use pyo3::prelude::*;

#[pyo3_asyncio::async_std::main]
async fn main() -> PyResult<()> {
    let fut = Python::with_gil(|py| {
        let asyncio = py.import("asyncio")?;
        // convert asyncio.sleep into a Rust Future
        pyo3_asyncio::async_std::into_future(asyncio.call_method1("sleep", (1.into_py(py),))?)
    })?;

    fut.await?;

    Ok(())
}

The same application can be written to use tokio instead using the #[pyo3_asyncio::tokio::main] attribute.

# Cargo.toml dependencies
[dependencies]
pyo3 = { version = "0.14" }
pyo3-asyncio = { version = "0.14", features = ["attributes", "tokio-runtime"] }
tokio = "1.4"
//! main.rs

use pyo3::prelude::*;

#[pyo3_asyncio::tokio::main]
async fn main() -> PyResult<()> {
    let fut = Python::with_gil(|py| {
        let asyncio = py.import("asyncio")?;
        // convert asyncio.sleep into a Rust Future
        pyo3_asyncio::tokio::into_future(asyncio.call_method1("sleep", (1.into_py(py),))?)
    })?;

    fut.await?;

    Ok(())
}

More details on the usage of this library can be found in the API docs and the primer below.

PyO3 Native Rust Modules

PyO3 Asyncio can also be used to write native modules with async functions.

Add the [lib] section to Cargo.toml to make your library a cdylib that Python can import.

[lib]
name = "my_async_module"
crate-type = ["cdylib"]

Make your project depend on pyo3 with the extension-module feature enabled and select your pyo3-asyncio runtime:

For async-std:

[dependencies]
pyo3 = { version = "0.14", features = ["extension-module"] }
pyo3-asyncio = { version = "0.14", features = ["async-std-runtime"] }
async-std = "1.9"

For tokio:

[dependencies]
pyo3 = { version = "0.14", features = ["extension-module"] }
pyo3-asyncio = { version = "0.14", features = ["tokio-runtime"] }
tokio = "1.4"

Export an async function that makes use of async-std:


#![allow(unused)]
fn main() {
//! lib.rs

use pyo3::{prelude::*, wrap_pyfunction};

#[pyfunction]
fn rust_sleep(py: Python) -> PyResult<&PyAny> {
    pyo3_asyncio::async_std::future_into_py(py, async {
        async_std::task::sleep(std::time::Duration::from_secs(1)).await;
        Ok(Python::with_gil(|py| py.None()))
    })
}

#[pymodule]
fn my_async_module(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;

    Ok(())
}

}

If you want to use tokio instead, here's what your module should look like:


#![allow(unused)]
fn main() {
//! lib.rs

use pyo3::{prelude::*, wrap_pyfunction};

#[pyfunction]
fn rust_sleep(py: Python) -> PyResult<&PyAny> {
    pyo3_asyncio::tokio::future_into_py(py, async {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        Ok(Python::with_gil(|py| py.None()))
    })
}

#[pymodule]
fn my_async_module(py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;
    Ok(())
}
}

You can build your module with maturin (see the Using Rust from Python section in the PyO3 guide for setup instructions). After that you should be able to run the Python REPL to try it out.

maturin develop && python3
🔗 Found pyo3 bindings
🐍 Found CPython 3.8 at python3
    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Python 3.8.5 (default, Jan 27 2021, 15:41:15)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>>
>>> from my_async_module import rust_sleep
>>>
>>> async def main():
>>>     await rust_sleep()
>>>
>>> # should sleep for 1s
>>> asyncio.run(main())
>>>

Awaiting an Async Python Function in Rust

Let's take a look at a dead simple async Python function:

# Sleep for 1 second
async def py_sleep():
    await asyncio.sleep(1)

Async functions in Python are simply functions that return a coroutine object. For our purposes, we really don't need to know much about these coroutine objects. The key factor here is that calling an async function is just like calling a regular function; the only difference is that we have to do something special with the object that it returns.

Normally in Python, that something special is the await keyword, but in order to await this coroutine in Rust, we first need to convert it into Rust's version of a coroutine: a Future. That's where pyo3-asyncio comes in. pyo3_asyncio::into_future performs this conversion for us.

The following example uses into_future to call the py_sleep function shown above and then await the coroutine object returned from the call:

use pyo3::prelude::*;

#[pyo3_asyncio::tokio::main]
async fn main() -> PyResult<()> {
    let future = Python::with_gil(|py| -> PyResult<_> {
        // import the module containing the py_sleep function
        let example = py.import("example")?;

        // calling the py_sleep method like a normal function
        // returns a coroutine
        let coroutine = example.call_method0("py_sleep")?;

        // convert the coroutine into a Rust future using the
        // tokio runtime
        pyo3_asyncio::tokio::into_future(coroutine)
    })?;

    // await the future
    future.await?;

    Ok(())
}

Alternatively, the below example shows how to write a #[pyfunction] which uses into_future to receive and await a coroutine argument:


#![allow(unused)]
fn main() {
#[pyfunction]
fn await_coro(coro: &PyAny) -> PyResult<()> {
    // convert the coroutine into a Rust future using the
    // async_std runtime
    let f = pyo3_asyncio::async_std::into_future(coro)?;

    pyo3_asyncio::async_std::run_until_complete(coro.py(), async move {
        // await the future
        f.await?;
        Ok(())
    })
}
}

This could be called from Python as:

import asyncio

async def py_sleep():
    await asyncio.sleep(1)

await_coro(py_sleep())

If instead you wanted to pass a callable function to the #[pyfunction] (i.e. the last line becomes await_coro(py_sleep)), then the above example needs to be tweaked to first call the callable to get the coroutine:


#![allow(unused)]
fn main() {
#[pyfunction]
fn await_coro(callable: &PyAny) -> PyResult<()> {
    // get the coroutine by calling the callable
    let coro = callable.call0()?;

    // convert the coroutine into a Rust future using the
    // async_std runtime
    let f = pyo3_asyncio::async_std::into_future(coro)?;

    pyo3_asyncio::async_std::run_until_complete(coro.py(), async move {
        // await the future
        f.await?;
        Ok(())
    })
}
}

This can be particularly helpful where you need to repeatedly create and await a coroutine. Trying to await the same coroutine multiple times will raise an error:

RuntimeError: cannot reuse already awaited coroutine

If you're interested in learning more about coroutines and awaitables in general, check out the Python 3 asyncio docs for more information.

Awaiting a Rust Future in Python

Here we have the same async function as before written in Rust using the async-std runtime:


#![allow(unused)]
fn main() {
/// Sleep for 1 second
async fn rust_sleep() {
    async_std::task::sleep(std::time::Duration::from_secs(1)).await;
}
}

Similar to Python, Rust's async functions also return a special object called a Future:


#![allow(unused)]
fn main() {
let future = rust_sleep();
}

We can convert this Future object into a Python awaitable, which tells Python that it can be used with the await keyword. In order to do this, we'll call pyo3_asyncio::async_std::future_into_py:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

async fn rust_sleep() {
    async_std::task::sleep(std::time::Duration::from_secs(1)).await;
}

#[pyfunction]
fn call_rust_sleep(py: Python) -> PyResult<&PyAny> {
    pyo3_asyncio::async_std::future_into_py(py, async move {
        rust_sleep().await;
        Ok(Python::with_gil(|py| py.None()))
    })
}
}

In Python, we can call this pyo3 function just like any other async function:

from example import call_rust_sleep

async def rust_sleep():
    await call_rust_sleep()

Managing Event Loops

Python's event loop requires some special treatment, especially regarding the main thread. Some of Python's asyncio features, like proper signal handling, require control over the main thread, which doesn't always play well with Rust.

Luckily, Rust's event loops are pretty flexible and don't need control over the main thread, so in pyo3-asyncio, we decided the best way to handle Rust/Python interop was to just surrender the main thread to Python and run Rust's event loops in the background. Unfortunately, since most event loop implementations prefer control over the main thread, this can still make some things awkward.

PyO3 Asyncio Initialization

Because Python needs to control the main thread, we can't use the convenient proc macros from Rust runtimes to handle the main function or #[test] functions. Instead, the initialization for PyO3 has to be done from the main function and the main thread must block on pyo3_asyncio::run_forever or pyo3_asyncio::async_std::run_until_complete.

Because we have to block on one of those functions, we can't use #[async_std::main] or #[tokio::main] since it's not a good idea to make long blocking calls during an async function.

Internally, these #[main] proc macros are expanded to something like this:

fn main() {
    // your async main fn
    async fn _main_impl() { /* ... */ }
    Runtime::new().block_on(_main_impl());
}

Making a long blocking call inside the Future that's being driven by block_on prevents that thread from doing anything else and can spell trouble for some runtimes (also this will actually deadlock a single-threaded runtime!). Many runtimes have some sort of spawn_blocking mechanism that can avoid this problem, but again that's not something we can use here since we need it to block on the main thread.

For this reason, pyo3-asyncio provides its own set of proc macros to provide you with this initialization. These macros are intended to mirror the initialization of async-std and tokio while also satisfying the Python runtime's needs.

Here's a full example of PyO3 initialization with the async-std runtime:

use pyo3::prelude::*;

#[pyo3_asyncio::async_std::main]
async fn main() -> PyResult<()> {
    // PyO3 is initialized - Ready to go

    let fut = Python::with_gil(|py| -> PyResult<_> {
        let asyncio = py.import("asyncio")?;

        // convert asyncio.sleep into a Rust Future
        pyo3_asyncio::async_std::into_future(
            asyncio.call_method1("sleep", (1.into_py(py),))?
        )
    })?;

    fut.await?;

    Ok(())
}

A Note About asyncio.run

In Python 3.7+, the recommended way to run a top-level coroutine with asyncio is with asyncio.run. In v0.13 we recommended against using this function due to initialization issues, but in v0.14 it's perfectly valid to use this function... with a caveat.

Since our Rust <--> Python conversions require a reference to the Python event loop, this poses a problem. Imagine we have a PyO3 Asyncio module that defines a rust_sleep function like in previous examples. You might rightfully assume that you can pass this directly to asyncio.run like this:

import asyncio

from my_async_module import rust_sleep

asyncio.run(rust_sleep())

You might be surprised to find out that this throws an error:

Traceback (most recent call last):
  File "example.py", line 5, in <module>
    asyncio.run(rust_sleep())
RuntimeError: no running event loop

What's happening here is that we are calling rust_sleep before the future is actually running on the event loop created by asyncio.run. This is counter-intuitive, but expected behaviour, and unfortunately there doesn't seem to be a good way of solving this problem within PyO3 Asyncio itself.

However, we can make this example work with a simple workaround:

import asyncio

from my_async_module import rust_sleep

# Calling main will just construct the coroutine that later calls rust_sleep.
# - This ensures that rust_sleep will be called when the event loop is running,
#   not before.
async def main():
    await rust_sleep()

# Run the main() coroutine at the top-level instead
asyncio.run(main())

Non-standard Python Event Loops

Python allows you to use alternatives to the default asyncio event loop. One popular alternative is uvloop. In v0.13 using non-standard event loops was a bit of an ordeal, but in v0.14 it's trivial.

Using uvloop in a PyO3 Asyncio Native Extensions

# Cargo.toml

[lib]
name = "my_async_module"
crate-type = ["cdylib"]

[dependencies]
pyo3 = { version = "0.14", features = ["extension-module"] }
pyo3-asyncio = { version = "0.14", features = ["tokio-runtime"] }
async-std = "1.9"
tokio = "1.4"

#![allow(unused)]
fn main() {
//! lib.rs

use pyo3::{prelude::*, wrap_pyfunction};

#[pyfunction]
fn rust_sleep(py: Python) -> PyResult<&PyAny> {
    pyo3_asyncio::tokio::future_into_py(py, async {
        tokio::time::sleep(std::time::Duration::from_secs(1)).await;
        Ok(Python::with_gil(|py| py.None()))
    })
}

#[pymodule]
fn my_async_module(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(rust_sleep, m)?)?;

    Ok(())
}
}
$ maturin develop && python3
🔗 Found pyo3 bindings
🐍 Found CPython 3.8 at python3
    Finished dev [unoptimized + debuginfo] target(s) in 0.04s
Python 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import asyncio
>>> import uvloop
>>>
>>> import my_async_module
>>>
>>> uvloop.install()
>>>
>>> async def main():
...     await my_async_module.rust_sleep()
...
>>> asyncio.run(main())
>>>

Using uvloop in Rust Applications

Using uvloop in Rust applications is a bit trickier, but it's still possible with relatively few modifications.

Unfortunately, we can't make use of the #[pyo3_asyncio::<runtime>::main] attribute with non-standard event loops. This is because the #[pyo3_asyncio::<runtime>::main] proc macro has to interact with the Python event loop before we can install the uvloop policy.

[dependencies]
async-std = "1.9"
pyo3 = "0.14"
pyo3-asyncio = { version = "0.14", features = ["async-std-runtime"] }
//! main.rs

use pyo3::{prelude::*, types::PyType};

fn main() -> PyResult<()> {
    pyo3::prepare_freethreaded_python();

    Python::with_gil(|py| {
        let uvloop = py.import("uvloop")?;
        uvloop.call_method0("install")?;

        // store a reference for the assertion
        let uvloop = PyObject::from(uvloop);

        pyo3_asyncio::async_std::run(py, async move {
            // verify that we are on a uvloop.Loop
            Python::with_gil(|py| -> PyResult<()> {
                assert!(uvloop
                    .as_ref(py)
                    .getattr("Loop")?
                    .downcast::<PyType>()
                    .unwrap()
                    .is_instance(pyo3_asyncio::async_std::get_current_loop(py)?)?);
                Ok(())
            })?;

            async_std::task::sleep(std::time::Duration::from_secs(1)).await;

            Ok(())
        })
    })
}

Additional Information

  • Managing event loop references can be tricky with pyo3-asyncio. See Event Loop References in the API docs to get a better intuition for how event loop references are managed in this library.
  • Testing pyo3-asyncio libraries and applications requires a custom test harness since Python requires control over the main thread. You can find a testing guide in the API docs for the testing module.

Frequently Asked Questions / Troubleshooting

I'm experiencing deadlocks using PyO3 with lazy_static or once_cell!

lazy_static and once_cell::sync both use locks to ensure that initialization is performed only by a single thread. Because the Python GIL is an additional lock this can lead to deadlocks in the following way:

  1. A thread (thread A) which has acquired the Python GIL starts initialization of a lazy_static value.
  2. The initialization code calls some Python API which temporarily releases the GIL e.g. Python::import.
  3. Another thread (thread B) acquires the Python GIL and attempts to access the same lazy_static value.
  4. Thread B is blocked, because it waits for lazy_static's initialization lock to be released.
  5. Thread A is blocked, because it waits to re-acquire the GIL which thread B still holds.
  6. Deadlock.

PyO3 provides a struct GILOnceCell which works equivalently to OnceCell but relies solely on the Python GIL for thread safety. This means it can be used in place of lazy_static or once_cell where you are experiencing the deadlock described above. See the documentation for GILOnceCell for an example of how to use it.
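
For illustration, here is a minimal sketch of using GILOnceCell to lazily initialize a shared Python object (the static and function names are arbitrary):

#![allow(unused)]
fn main() {
use pyo3::once_cell::GILOnceCell;
use pyo3::prelude::*;
use pyo3::types::PyList;

// Initialization is protected by the GIL rather than by an additional lock,
// so the deadlock described above cannot occur.
static SHARED_LIST: GILOnceCell<Py<PyList>> = GILOnceCell::new();

fn shared_list<'py>(py: Python<'py>) -> &'py PyList {
    SHARED_LIST
        .get_or_init(py, || PyList::empty(py).into())
        .as_ref(py)
}
}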

I can't run cargo test: I'm having linker issues like "Symbol not found" or "Undefined reference to _PyExc_SystemError"!

Currently, #340 causes cargo test to fail with linking errors when the extension-module feature is activated. For now you can work around this by making the extension-module feature optional and running the tests with cargo test --no-default-features:

[dependencies.pyo3]
version = "version = "0.14.3""

[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]

I can't run cargo test: my crate cannot be found for tests in tests/ directory!

The Rust book suggests putting integration tests inside a tests/ directory.

For a PyO3 extension-module project where the crate-type is set to "cdylib" in your Cargo.toml, the compiler won't be able to find your crate and will display errors such as E0432 or E0463:

error[E0432]: unresolved import `my_crate`
 --> tests/test_my_crate.rs:1:5
  |
1 | use my_crate;
  |     ^^^^^^^^^^^^ no external crate `my_crate`

The best solution is to make your crate types include both rlib and cdylib:

# Cargo.toml
[lib]
crate-type = ["cdylib", "rlib"]

Ctrl-C doesn't do anything while my Rust code is executing!

This is because Ctrl-C raises a SIGINT signal, which is handled by the calling Python process by simply setting a flag to act upon later. This flag isn't checked while Rust code called from Python is executing, only once control returns to the Python interpreter.

You can give the Python interpreter a chance to process the signal properly by calling Python::check_signals. It's good practice to call this function regularly if you have a long-running Rust function so that your users can cancel it.
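
For example, a long-running #[pyfunction] might periodically call check_signals like this (a small sketch; the function and its workload are purely illustrative):

#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyfunction]
fn sum_up_to(py: Python, n: u64) -> PyResult<u64> {
    let mut total: u64 = 0;
    for i in 0..n {
        total += i;
        // Periodically let the Python interpreter process pending signals;
        // this returns Err(KeyboardInterrupt) if Ctrl-C was pressed.
        if i % 1_000_000 == 0 {
            py.check_signals()?;
        }
    }
    Ok(total)
}
}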

#[pyo3(get)] clones my field!

You may have a nested struct similar to this:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
#[derive(Clone)]
struct Inner { /* fields omitted */ }

#[pyclass]
struct Outer {
    #[pyo3(get)]
    inner: Inner,
}

#[pymethods]
impl Outer {
    #[new]
    fn __new__() -> Self {
        Self { inner: Inner {} }
    }
}
}

When Python code accesses Outer's field, PyO3 will return a new object on every access (note that their addresses are different):

outer = Outer()

a = outer.inner
b = outer.inner

assert a is b, f"a: {a}\nb: {b}"
AssertionError: a: <builtins.Inner object at 0x00000238FFB9C7B0>
b: <builtins.Inner object at 0x00000238FFB9C830>

This can be especially confusing if the field is mutable, as getting the field and then mutating it won't persist - you'll just get a fresh clone of the original on the next access. Unfortunately Python and Rust don't agree about ownership - if PyO3 gave out references to (possibly) temporary Rust objects to Python code, Python code could then keep that reference alive indefinitely. Therefore returning Rust objects requires cloning.

If you don't want that cloning to happen, a workaround is to allocate the field on the Python heap and store a reference to that, by using Py<...>:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
#[derive(Clone)]
struct Inner { /* fields omitted */ }

#[pyclass]
struct Outer {
    #[pyo3(get)]
    inner: Py<Inner>,
}

#[pymethods]
impl Outer {
    #[new]
    fn __new__(py: Python) -> PyResult<Self> {
        Ok(Self {
            inner: Py::new(py, Inner {})?,
        })
    }
}
}

This time a and b are the same object:

outer = Outer()

a = outer.inner
b = outer.inner

assert a is b, f"a: {a}\nb: {b}"
print(f"a: {a}\nb: {b}")
a: <builtins.Inner object at 0x0000020044FCC670>
b: <builtins.Inner object at 0x0000020044FCC670>

The downside to this approach is that any Rust code working on the Outer struct now has to acquire the GIL to do anything with its field.
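
For example (a brief sketch reusing the definitions above), reading the field from Rust now looks like this:

#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
#[derive(Clone)]
struct Inner { /* fields omitted */ }

#[pyclass]
struct Outer {
    #[pyo3(get)]
    inner: Py<Inner>,
}

fn use_inner(outer: &Outer) {
    Python::with_gil(|py| {
        // Borrowing the Python-owned `Inner` (as a PyRef) requires holding the GIL.
        let _inner = outer.inner.borrow(py);
        // ... read fields or call methods on `_inner` here ...
    });
}
}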

Migrating from older PyO3 versions

This guide can help you upgrade code through breaking changes from one PyO3 version to the next. For a detailed list of all changes, see the CHANGELOG.

from 0.13.* to 0.14

auto-initialize feature is now opt-in

For projects embedding Python in Rust, PyO3 no longer automatically initializes a Python interpreter on the first call to Python::with_gil (or Python::acquire_gil) unless the auto-initialize feature is enabled.

New multiple-pymethods feature

#[pymethods] has been reworked with a simpler default implementation which removes the dependency on the inventory crate. This reduces dependencies and compile times for the majority of users.

The limitation of the new default implementation is that it cannot support multiple #[pymethods] blocks for the same #[pyclass]. If you need this functionality, you must enable the multiple-pymethods feature which will switch #[pymethods] to the inventory-based implementation.
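
As a short sketch of what that enables (the class and methods here are illustrative), with the multiple-pymethods feature a single #[pyclass] may have several #[pymethods] blocks:

#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyclass]
struct Number {
    value: i32,
}

#[pymethods]
impl Number {
    #[new]
    fn new(value: i32) -> Self {
        Number { value }
    }
}

// A second block for the same class; this requires the
// `multiple-pymethods` feature to be enabled.
#[pymethods]
impl Number {
    fn double(&self) -> i32 {
        self.value * 2
    }
}
}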

Deprecated #[pyproto] methods

Some protocol (aka __dunder__) methods such as __bytes__ and __format__ have been possible to implement two ways in PyO3 for some time: via a #[pyproto] (e.g. PyBasicProtocol for the methods listed here), or by writing them directly in #[pymethods]. This is only true for a handful of the #[pyproto] methods (for technical reasons to do with the way PyO3 currently interacts with the Python C-API).

In the interest of having only one way to do things, the #[pyproto] forms of these methods have been deprecated.

To migrate just move the affected methods from a #[pyproto] to a #[pymethods] block.

Before:

use pyo3::prelude::*;
use pyo3::class::basic::PyBasicProtocol;

#[pyclass]
struct MyClass { }

#[pyproto]
impl PyBasicProtocol for MyClass {
    fn __bytes__(&self) -> &'static [u8] {
        b"hello, world"
    }
}

After:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyclass]
struct MyClass { }

#[pymethods]
impl MyClass {
    fn __bytes__(&self) -> &'static [u8] {
        b"hello, world"
    }
}
}

from 0.12.* to 0.13

Minimum Rust version increased to Rust 1.45

PyO3 0.13 makes use of new Rust language features stabilised between Rust 1.40 and Rust 1.45. If you are using a Rust compiler older than Rust 1.45, you will need to update your toolchain to be able to continue using PyO3.

Runtime changes to support the CPython limited API

In PyO3 0.13 support was added for compiling against the CPython limited API. This had a number of implications for all PyO3 users, described here.

The largest of these is that all types created from PyO3 are what CPython calls "heap" types. The specific implications of this are:

  • If you wish to subclass one of these types from Rust you must mark it #[pyclass(subclass)], as you would if you wished to allow subclassing it from Python code.
  • Type objects are now mutable - Python code can set attributes on them.
  • __module__ on types without #[pyclass(module="mymodule")] no longer returns builtins; it now raises AttributeError.

from 0.11.* to 0.12

PyErr has been reworked

In PyO3 0.12 the PyErr type has been re-implemented to be significantly more compatible with the standard Rust error handling ecosystem. Specifically, PyErr now implements Error + Send + Sync, which are the standard traits used for error types.

While this has necessitated the removal of a number of APIs, the resulting PyErr type should now be much easier to work with. The following sections list the changes in detail and how to migrate to the new APIs.

PyErr::new and PyErr::from_type now require Send + Sync for their argument

For most uses no change will be needed. If you are trying to construct PyErr from a value that is not Send + Sync, you will need to first create the Python object and then use PyErr::from_instance.

Similarly, any types which implemented PyErrArguments will now need to be Send + Sync.

PyErr's contents are now private

It is no longer possible to access the fields .ptype, .pvalue and .ptraceback of a PyErr. You should instead now use the new methods PyErr::ptype(), PyErr::pvalue() and PyErr::ptraceback().

PyErrValue and PyErr::from_value have been removed

As these were part of the internals of PyErr which have been reworked, these APIs no longer exist.

If you used this API, it is recommended to use PyException::new_err (see the section on Exception types).

Into<PyResult<T>> for PyErr has been removed

This implementation was redundant. Just construct the Result::Err variant directly.

Before:

let result: PyResult<()> = PyErr::new::<TypeError, _>("error message").into();

After (also using the new reworked exception types; see the following section):


#![allow(unused)]
fn main() {
use pyo3::{PyErr, PyResult, exceptions::PyTypeError};
let result: PyResult<()> = Err(PyTypeError::new_err("error message"));
}

Exception types have been reworked

Previously exception types were zero-sized marker types purely used to construct PyErr. In PyO3 0.12, these types have been replaced with full definitions and are usable in the same way as PyAny, PyDict etc. This makes it possible to interact with Python exception objects.

The new types also have names starting with the "Py" prefix. For example, before:

let err: PyErr = TypeError::py_err("error message");

After:


#![allow(unused)]
fn main() {
use pyo3::{PyErr, PyResult, Python, type_object::PyTypeObject};
use pyo3::exceptions::{PyBaseException, PyTypeError};
Python::with_gil(|py| -> PyResult<()> {
let err: PyErr = PyTypeError::new_err("error message");

// Uses Display for PyErr, new for PyO3 0.12
assert_eq!(err.to_string(), "TypeError: error message");

// Now possible to interact with exception instances, new for PyO3 0.12
let instance: &PyBaseException = err.instance(py);
assert_eq!(instance.getattr("__class__")?, PyTypeError::type_object(py).as_ref());
Ok(())
}).unwrap();
}

FromPy has been removed

To simplify the PyO3 conversion traits, the FromPy trait has been removed. Previously there were two ways to define the to-Python conversion for a type: FromPy<T> for PyObject and IntoPy<PyObject> for T.

Now there is only one way to define the conversion, IntoPy, so downstream crates may need to adjust accordingly.

Before:

use pyo3::prelude::*;
struct MyPyObjectWrapper(PyObject);

impl FromPy<MyPyObjectWrapper> for PyObject {
    fn from_py(other: MyPyObjectWrapper, _py: Python) -> Self {
        other.0
    }
}

After


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
struct MyPyObjectWrapper(PyObject);

impl IntoPy<PyObject> for MyPyObjectWrapper {
    fn into_py(self, _py: Python) -> PyObject {
        self.0
    }
}
}

Similarly, code which was using the FromPy trait can be trivially rewritten to use IntoPy.

Before:

use pyo3::prelude::*;
Python::with_gil(|py| {
let obj = PyObject::from_py(1.234, py);
})

After:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
Python::with_gil(|py| {
let obj: PyObject = 1.234.into_py(py);
})
}

PyObject is now a type alias of Py<PyAny>

This should change very little from a usage perspective. If you implemented traits for both PyObject and Py<T>, you may find you can just remove the PyObject implementation.

AsPyRef has been removed

As PyObject has been changed to be just a type alias, the only remaining implementor of AsPyRef was Py<T>. This removed the need for a trait, so the AsPyRef::as_ref method has been moved to Py::as_ref.

This should require no code changes except removing use pyo3::AsPyRef for code which did not use pyo3::prelude::*.

Before:

use pyo3::{AsPyRef, Py, types::PyList};
pyo3::Python::with_gil(|py| {
let list_py: Py<PyList> = PyList::empty(py).into();
let list_ref: &PyList = list_py.as_ref(py);
})

After:


#![allow(unused)]
fn main() {
use pyo3::{Py, types::PyList};
pyo3::Python::with_gil(|py| {
let list_py: Py<PyList> = PyList::empty(py).into();
let list_ref: &PyList = list_py.as_ref(py);
})
}

from 0.10.* to 0.11

Stable Rust

PyO3 now supports the stable Rust toolchain. The minimum required version is 1.39.0.

#[pyclass] structs must now be Send or unsendable

Because #[pyclass] structs can be sent between threads by the Python interpreter, they must implement Send or declared as unsendable (by #[pyclass(unsendable)]). Note that unsendable is added in PyO3 0.11.1 and Send is always required in PyO3 0.11.0.

This may "break" some code which previously was accepted, even though it could be unsound. There can be two fixes:

  1. If you think that your #[pyclass] actually must be Sendable, then implement Send. A common, safer way is to use thread-safe types. E.g., Arc instead of Rc, Mutex instead of RefCell, and Box<dyn T + Send> instead of Box<dyn T>.

    Before:

    
    #![allow(unused)]
    fn main() {
    use pyo3::prelude::*;
    use std::rc::Rc;
    use std::cell::RefCell;
    
    #[pyclass]
    struct NotThreadSafe {
        shared_bools: Rc<RefCell<Vec<bool>>>,
        closure: Box<dyn Fn()>
    }
    }
    

    After:

    
    #![allow(unused)]
    fn main() {
    use pyo3::prelude::*;
    use std::sync::{Arc, Mutex};
    
    #[pyclass]
    struct ThreadSafe {
        shared_bools: Arc<Mutex<Vec<bool>>>,
        closure: Box<dyn Fn() + Send>
    }
    }
    

    In situations where you cannot change your #[pyclass] to automatically implement Send (e.g., when it contains a raw pointer), you can use unsafe impl Send. In such cases, care should be taken to ensure the struct is actually thread safe. See the Rustonomicon for more.

  2. If you think that your #[pyclass] should not be accessed by another thread, you can use the unsendable flag. A class marked with unsendable panics when accessed by another thread, making it thread-safe to expose an unsendable object to the Python interpreter.

    Before:

    
    #![allow(unused)]
    fn main() {
    use pyo3::prelude::*;
    
    #[pyclass]
    struct Unsendable {
        pointers: Vec<*mut std::os::raw::c_char>,
    }
    }
    

    After:

    
    #![allow(unused)]
    fn main() {
    use pyo3::prelude::*;
    
    #[pyclass(unsendable)]
    struct Unsendable {
        pointers: Vec<*mut std::os::raw::c_char>,
    }
    }
    

All PyObject and Py<T> methods now take Python as an argument

Previously, a few methods such as PyObject::get_refcnt did not take Python as an argument (to ensure that the Python GIL was held by the current thread). Technically, this was not sound. To migrate, just pass a py argument to any calls to these methods.

Before:


#![allow(unused)]
fn main() {
pyo3::Python::with_gil(|py| {
py.None().get_refcnt();
})
}

After:


#![allow(unused)]
fn main() {
pyo3::Python::with_gil(|py| {
py.None().get_refcnt(py);
})
}

from 0.9.* to 0.10

ObjectProtocol is removed

All of its methods have been moved to PyAny. Since all native types (e.g. PyList) now implement Deref<Target = PyAny>, all you need to do is remove ObjectProtocol from your code. If you only used ObjectProtocol via use pyo3::prelude::*, no changes are needed.

Before:


#![allow(unused)]
fn main() {
use pyo3::ObjectProtocol;

pyo3::Python::with_gil(|py| {
let obj = py.eval("lambda: 'Hi :)'", None, None).unwrap();
let hi: &pyo3::types::PyString = obj.call0().unwrap().downcast().unwrap();
assert_eq!(hi.len().unwrap(), 5);
})
}

After:


#![allow(unused)]
fn main() {
pyo3::Python::with_gil(|py| {
let obj = py.eval("lambda: 'Hi :)'", None, None).unwrap();
let hi: &pyo3::types::PyString = obj.call0().unwrap().downcast().unwrap();
assert_eq!(hi.len().unwrap(), 5);
})
}

No #![feature(specialization)] in user code

While PyO3 itself still requires specialization and nightly Rust, now you don't have to use #![feature(specialization)] in your crate.

from 0.8.* to 0.9

#[new] interface

PyRawObject is now removed and our syntax for constructors has changed.

Before:


#![allow(unused)]
fn main() {
#[pyclass]
struct MyClass {}

#[pymethods]
impl MyClass {
   #[new]
   fn new(obj: &PyRawObject) {
       obj.init(MyClass { })
   }
}
}

After:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {}

#[pymethods]
impl MyClass {
   #[new]
   fn new() -> Self {
       MyClass {}
   }
}
}

Basically you can return Self or Result<Self> directly. For more, see the constructor section of this guide.
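
For example (a small illustrative sketch), a constructor which validates its argument can return PyResult<Self>:

#![allow(unused)]
fn main() {
use pyo3::exceptions::PyValueError;
use pyo3::prelude::*;

#[pyclass]
struct Positive {
    value: i32,
}

#[pymethods]
impl Positive {
    #[new]
    fn new(value: i32) -> PyResult<Self> {
        if value > 0 {
            Ok(Positive { value })
        } else {
            // Raised as a ValueError in Python.
            Err(PyValueError::new_err("value must be positive"))
        }
    }
}
}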

PyCell

PyO3 0.9 introduces PyCell, which is a RefCell-like object wrapper for ensuring Rust's rules regarding aliasing of references are upheld. For more detail, see the Rust Book's section on Rust's rules of references

For #[pymethods] or #[pyfunction]s, your existing code should continue to work without any change. Python exceptions will automatically be raised when your functions are used in a way which breaks Rust's rules of references.

Here is an example.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct Names {
    names: Vec<String>
}

#[pymethods]
impl Names {
    #[new]
    fn new() -> Self {
        Names { names: vec![] }
    }
    fn merge(&mut self, other: &mut Names) {
        self.names.append(&mut other.names)
    }
}
Python::with_gil(|py| {
    let names = PyCell::new(py, Names::new()).unwrap();
    pyo3::py_run!(py, names, r"
    try:
       names.merge(names)
       assert False, 'Unreachable'
    except RuntimeError as e:
       assert str(e) == 'Already borrowed'
    ");
})
}

Names has a merge method, which takes &mut self and another argument of type &mut Self. Given this #[pyclass], calling names.merge(names) in Python raises a PyBorrowMutError exception, since it requires two mutable borrows of names.

However, for #[pyproto] and some functions, you need to manually fix the code.

Object creation

In 0.8 object creation was done with PyRef::new and PyRefMut::new. In 0.9 these have both been removed. To upgrade code, please use PyCell::new instead. If you need PyRef or PyRefMut, just call .borrow() or .borrow_mut() on the newly-created PyCell.

Before:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {}
Python::with_gil(|py| {
let obj_ref = PyRef::new(py, MyClass {}).unwrap();
})
}

After:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
#[pyclass]
struct MyClass {}
Python::with_gil(|py| {
let obj = PyCell::new(py, MyClass {}).unwrap();
let obj_ref = obj.borrow();
})
}

Object extraction

For PyClass types T, &T and &mut T no longer have FromPyObject implementations. Instead you should extract PyRef<T> or PyRefMut<T>, respectively. If T implements Clone, you can extract T itself. In addition, you can also extract &PyCell<T>, though you rarely need it.

Before:

let obj: &PyAny = create_obj();
let obj_ref: &MyClass = obj.extract().unwrap();
let obj_ref_mut: &mut MyClass = obj.extract().unwrap();

After:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::IntoPyDict;
#[pyclass] #[derive(Clone)] struct MyClass {}
#[pymethods] impl MyClass { #[new]fn new() -> Self { MyClass {} }}
Python::with_gil(|py| {
let typeobj = py.get_type::<MyClass>();
let d = [("c", typeobj)].into_py_dict(py);
let create_obj = || py.eval("c()", None, Some(d)).unwrap();
let obj: &PyAny = create_obj();
let obj_cell: &PyCell<MyClass> = obj.extract().unwrap();
let obj_cloned: MyClass = obj.extract().unwrap(); // extracted by cloning the object
{
    let obj_ref: PyRef<MyClass> = obj.extract().unwrap();
    // we need to drop obj_ref before we can extract a PyRefMut due to Rust's rules of references
}
let obj_ref_mut: PyRefMut<MyClass> = obj.extract().unwrap();
})
}

#[pyproto]

Most of the arguments to methods in #[pyproto] impls require a FromPyObject implementation. So if your protocol methods take &T or &mut T (where T: PyClass), please use PyRef or PyRefMut instead.

Before:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::class::PySequenceProtocol;
#[pyclass]
struct ByteSequence {
    elements: Vec<u8>,
}
#[pyproto]
impl PySequenceProtocol for ByteSequence {
    fn __concat__(&self, other: &Self) -> PyResult<Self> {
        let mut elements = self.elements.clone();
        elements.extend_from_slice(&other.elements);
        Ok(Self { elements })
    }
}
}

After:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::class::PySequenceProtocol;
#[pyclass]
struct ByteSequence {
    elements: Vec<u8>,
}
#[pyproto]
impl PySequenceProtocol for ByteSequence {
    fn __concat__(&self, other: PyRef<'p, Self>) -> PyResult<Self> {
        let mut elements = self.elements.clone();
        elements.extend_from_slice(&other.elements);
        Ok(Self { elements })
    }
}
}

PyO3 and rust-cpython

PyO3 began as a fork of rust-cpython at a time when rust-cpython wasn't maintained. Over time PyO3 has become fundamentally different from rust-cpython.

Macros

While rust-cpython has a macro_rules!-based DSL for declaring modules and classes, PyO3 uses proc macros. PyO3 also doesn't change your structs and functions, so you can still use them as normal Rust functions and structs.

rust-cpython

py_class!(class MyClass |py| {
    data number: i32;
    def __new__(_cls, arg: i32) -> PyResult<MyClass> {
        MyClass::create_instance(py, arg)
    }
    def half(&self) -> PyResult<i32> {
        Ok(self.number(py) / 2)
    }
});

pyo3


#![allow(unused)]
fn main() {
use pyo3::prelude::*;

#[pyclass]
struct MyClass {
   num: u32,
}

#[pymethods]
impl MyClass {
    #[new]
    fn new(num: u32) -> Self {
        MyClass { num }
    }

    fn half(&self) -> PyResult<u32> {
        Ok(self.num / 2)
    }
}
}

Ownership and lifetimes

While in rust-cpython you always own Python objects, PyO3 allows efficient borrowed objects, and most APIs are available with references.

Here is an example of the PyList API:

rust-cpython

impl PyList {

   fn new(py: Python) -> PyList {...}

   fn get_item(&self, py: Python, index: isize) -> PyObject {...}
}

pyo3

impl PyList {

   fn new(py: Python) -> &PyList {...}

   fn get_item(&self, index: isize) -> &PyAny {...}
}

In PyO3, all object references are bound by the GIL lifetime, so an owned Python object is not required, and it is safe to have functions like fn py<'p>(&'p self) -> Python<'p> {}.
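To illustrate, here is a minimal sketch (written against the 0.14-style API shown above, with made-up values) of a borrowed &PyList whose lifetime is tied to the GIL scope:

use pyo3::prelude::*;
use pyo3::types::PyList;

fn main() {
    Python::with_gil(|py| {
        // `list` is a &PyList bound to the lifetime of `py`;
        // no owned wrapper or manual reference counting is needed.
        let list = PyList::new(py, &[1, 2, 3]);
        let first: i64 = list.get_item(0).extract().unwrap();
        assert_eq!(first, 1);
    });
    // `list` cannot be used here: the borrow ended with the GIL scope.
}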

Error handling

rust-cpython requires a Python parameter for constructing a PyErr, which makes error handling ergonomics fairly poor; in particular, it is not possible to use ? with Rust errors.

PyO3, on the other hand, does not require Python for constructing a PyErr; it is only needed if you want to raise the exception in Python with the PyErr::restore() method. Thanks to the various std::convert::From<E> for PyErr implementations for Rust standard error types E, propagating errors with ? is supported automatically.
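For example, a minimal sketch (relying on PyO3's From<std::num::ParseIntError> for PyErr conversion; the function name is made up for illustration) could look like this:

use pyo3::prelude::*;

/// Parses `s` as an integer and doubles it. The `?` operator converts a
/// `std::num::ParseIntError` into a `PyErr` automatically, so no `Python`
/// token is needed; Python callers would see a `ValueError` on bad input.
#[pyfunction]
fn parse_and_double(s: &str) -> PyResult<i64> {
    let n: i64 = s.parse()?;
    Ok(n * 2)
}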

Using in Python a Rust function with trait bounds

PyO3 allows for easy conversion from Rust to Python for certain functions and classes (see the conversion table). However, it is not always straightforward to convert Rust code that requires a given trait implementation as an argument.

This tutorial explains how to expose a Rust function that takes a trait as an argument so it can be used from Python with classes implementing the same methods as the trait.

Why is this useful?

Pros

  • Make your Rust code available to Python users
  • Code complex algorithms in Rust with the help of the borrow checker

Cons

  • Not as fast as native Rust (type conversion has to be performed and one part of the code runs in Python)
  • You need to adapt your code to expose it

Example

Let's work with the following basic example: an implementation of an optimization solver operating on a given model.

Let's say we have a function solve that operates on a model and mutates its state. The argument of the function can be any model that implements the Model trait:


#![allow(unused)]
fn main() {
pub trait Model {
  fn set_variables(&mut self, inputs: &Vec<f64>);
  fn compute(&mut self);
  fn get_results(&self) -> Vec<f64>;
}

pub fn solve<T: Model>(model: &mut T) {
  println!("Magic solver that mutates the model into a resolved state");
}
}

Let's assume we have the following constraints:

  • We cannot change that code as it runs on many Rust models.
  • We also have many Python models that cannot be solved as this solver is not available in that language. Rewriting it in Python would be cumbersome and error-prone, as everything is already available in Rust.

How can we expose this solver to Python thanks to PyO3?

Implementation of the trait bounds for the Python class

If a Python class implements the same three methods as the Model trait, it seems logical that it could be adapted to use the solver. However, it is not possible to pass a PyObject to the solver, as it does not implement the Rust trait (even if the Python model has the required methods).

In order to implement the trait, we must write a wrapper around the calls in Rust to the Python model. The method signatures must be the same as the trait, keeping in mind that the Rust trait cannot be changed for the purpose of making the code available in Python.

The Python model we want to expose is the following one, which already contains all the required methods:

class Model:
    def set_variables(self, inputs):
        self.inputs = inputs
    def compute(self):
        self.results = [elt**2 - 3 for elt in self.inputs]
    def get_results(self):
        return self.results

The following wrapper will call the Python model from Rust, using a struct to hold the model as a PyAny object:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyAny;

pub trait Model {
  fn set_variables(&mut self, inputs: &Vec<f64>);
  fn compute(&mut self);
  fn get_results(&self) -> Vec<f64>;
}

struct UserModel {
    model: Py<PyAny>,
}

impl Model for UserModel {
    fn set_variables(&mut self, var: &Vec<f64>) {
        println!("Rust calling Python to set the variables");
        Python::with_gil(|py| {
            let values: Vec<f64> = var.clone();
            let list: PyObject = values.into_py(py);
            let py_model = self.model.as_ref(py);
            py_model
                .call_method("set_variables", (list,), None)
                .unwrap();
        })
    }

    fn get_results(&self) -> Vec<f64> {
        println!("Rust calling Python to get the results");
        Python::with_gil(|py| {
            self.model
                .as_ref(py)
                .call_method("get_results", (), None)
                .unwrap()
                .extract()
                .unwrap()
        })
    }

    fn compute(&mut self) {
        println!("Rust calling Python to perform the computation");
        Python::with_gil(|py| {
            self.model
                .as_ref(py)
                .call_method("compute", (), None)
                .unwrap();
        })
    }
}
}

Now that this bit is implemented, let's expose the model wrapper to Python. Let's add the PyO3 annotations and add a constructor:


#![allow(unused)]
fn main() {
pub trait Model {
  fn set_variables(&mut self, inputs: &Vec<f64>);
  fn compute(&mut self);
  fn get_results(&self) -> Vec<f64>;
}
use pyo3::prelude::*;
use pyo3::types::PyAny;

#[pyclass]
struct UserModel {
    model: Py<PyAny>,
}

#[pymodule]
fn trait_exposure(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_class::<UserModel>()?;
    Ok(())
}

#[pymethods]
impl UserModel {
    #[new]
    pub fn new(model: Py<PyAny>) -> Self {
        UserModel { model }
    }
}
}

Now we add the PyO3 annotations to the trait implementation:

#[pymethods]
impl Model for UserModel {
  // the previous trait implementation
}

However, the previous code will not compile. The compilation error is: error: #[pymethods] cannot be used on trait impl blocks

That's a bummer! However, we can write a second wrapper around these functions to call them directly. This wrapper will also perform the type conversions between Python and Rust.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyAny;

pub trait Model {
  fn set_variables(&mut self, inputs: &Vec<f64>);
  fn compute(&mut self);
  fn get_results(&self) -> Vec<f64>;
}

#[pyclass]
struct UserModel {
    model: Py<PyAny>,
}

impl Model for UserModel {
 fn set_variables(&mut self, var: &Vec<f64>) {
     println!("Rust calling Python to set the variables");
     Python::with_gil(|py| {
         let values: Vec<f64> = var.clone();
         let list: PyObject = values.into_py(py);
         let py_model = self.model.as_ref(py);
         py_model
             .call_method("set_variables", (list,), None)
             .unwrap();
     })
 }

 fn get_results(&self) -> Vec<f64> {
     println!("Rust calling Python to get the results");
     Python::with_gil(|py| {
         self.model
             .as_ref(py)
             .call_method("get_results", (), None)
             .unwrap()
             .extract()
             .unwrap()
     })
 }

 fn compute(&mut self) {
     println!("Rust calling Python to perform the computation");
     Python::with_gil(|py| {
         self.model
             .as_ref(py)
             .call_method("compute", (), None)
             .unwrap();
     })

 }
}

#[pymethods]
impl UserModel {
    pub fn set_variables(&mut self, var: Vec<f64>) {
        println!("Set variables from Python calling Rust");
        Model::set_variables(self, &var)
    }

    pub fn get_results(&mut self) -> Vec<f64> {
        println!("Get results from Python calling Rust");
        Model::get_results(self)
    }

    pub fn compute(&mut self) {
        println!("Compute from Python calling Rust");
        Model::compute(self)
    }
}
}

This wrapper handles the type conversion between PyO3's requirements and the trait. In order to meet PyO3's requirements, this wrapper must:

  • return an object of type PyResult
  • use only values, not references, in the method signatures

Let's run the Python file:

class Model:
    def set_variables(self, inputs):
        self.inputs = inputs
    def compute(self):
        self.results = [elt**2 - 3 for elt in self.inputs]
    def get_results(self):
        return self.results

if __name__=="__main__":
  import trait_exposure

  myModel = Model()
  my_rust_model = trait_exposure.UserModel(myModel)
  my_rust_model.set_variables([2.0])
  print("Print value from Python: ", myModel.inputs)
  my_rust_model.compute()
  print("Print value from Python through Rust: ", my_rust_model.get_results())
  print("Print value directly from Python: ", myModel.get_results())

This outputs:

Set variables from Python calling Rust
Set variables from Rust calling Python
Print value from Python:  [2.0]
Compute from Python calling Rust
Compute from Rust calling Python
Get results from Python calling Rust
Get results from Rust calling Python
Print value from Python through Rust:  [1.0]
Print value directly from Python:  [1.0]

We have now successfully exposed a Rust model that implements the Model trait to Python!

We will now expose the solve function, but first, let's talk about type errors.

Type errors in Python

What happens if you have type errors when using Python and how can you improve the error messages?

Wrong types in Python function arguments

Let's first assume that in your Python file you call my_rust_model.set_variables(2.0) instead of my_rust_model.set_variables([2.0]).

The Rust signature expects a vector, which corresponds to a list in Python. What happens if, instead of a vector, we pass a single value?

When the Python code is executed, we get:

File "main.py", line 15, in <module>
   my_rust_model.set_variables(2)
TypeError

It is a type error and Python points to it, so it's easy to identify and solve.

Wrong types in Python method signatures

Now let's assume that the return type of one of the methods of our Model class is wrong, for example the get_results method, which is expected to return a Vec<f64> in Rust (a list in Python).

class Model:
    def set_variables(self, inputs):
        self.inputs = inputs
    def compute(self):
        self.results = [elt**2 -3 for elt in self.inputs]
    def get_results(self):
        return self.results[0]
        #return self.results <-- this is the expected output

This call results in the following panic:

pyo3_runtime.PanicException: called `Result::unwrap()` on an `Err` value: PyErr { type: Py(0x10dcf79f0, PhantomData) }

This error message is not helpful for a Python user who does not know anything about Rust, or for someone who does not know that PyO3 was used to interface with the Rust code.

However, as we are responsible for making the Rust code available to Python, we can do something about it.

The issue is that we called unwrap everywhere we could, and therefore any panic from PyO3 is forwarded directly to the end user.

Let's modify the code performing the type conversion to give a helpful error message to the Python user:

In our get_results method, we used the following call to perform the type conversion:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyAny;

pub trait Model {
  fn set_variables(&mut self, inputs: &Vec<f64>);
  fn compute(&mut self);
  fn get_results(&self) -> Vec<f64>;
}

#[pyclass]
struct UserModel {
    model: Py<PyAny>,
}

impl Model for UserModel {
    fn get_results(&self) -> Vec<f64> {
        println!("Rust calling Python to get the results");
        Python::with_gil(|py| {
            self.model
                .as_ref(py)
                .call_method("get_results", (), None)
                .unwrap()
                .extract()
                .unwrap()
        })
    }
    fn set_variables(&mut self, var: &Vec<f64>) {
        println!("Rust calling Python to set the variables");
        Python::with_gil(|py| {
            let values: Vec<f64> = var.clone();
            let list: PyObject = values.into_py(py);
            let py_model = self.model.as_ref(py);
            py_model
                .call_method("set_variables", (list,), None)
                .unwrap();
        })
    }

    fn compute(&mut self) {
        println!("Rust calling Python to perform the computation");
        Python::with_gil(|py| {
            self.model
                .as_ref(py)
                .call_method("compute", (), None)
                .unwrap();
        })
    }
}
}

Let's break it down in order to perform better error handling:


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyAny;

pub trait Model {
  fn set_variables(&mut self, inputs: &Vec<f64>);
  fn compute(&mut self);
  fn get_results(&self) -> Vec<f64>;
}

#[pyclass]
struct UserModel {
    model: Py<PyAny>,
}

impl Model for UserModel {
    fn get_results(&self) -> Vec<f64> {
        println!("Get results from Rust calling Python");
        Python::with_gil(|py| {
            let py_result: &PyAny = self
                .model
                .as_ref(py)
                .call_method("get_results", (), None)
                .unwrap();

            if py_result.get_type().name().unwrap() != "list" {
                panic!("Expected a list for the get_results() method signature, got {}", py_result.get_type().name().unwrap());
            }
            py_result.extract()
        })
        .unwrap()
    }
    fn set_variables(&mut self, var: &Vec<f64>) {
        println!("Rust calling Python to set the variables");
        Python::with_gil(|py| {
            let values: Vec<f64> = var.clone();
            let list: PyObject = values.into_py(py);
            let py_model = self.model.as_ref(py);
            py_model
                .call_method("set_variables", (list,), None)
                .unwrap();
        })
    }

    fn compute(&mut self) {
        println!("Rust calling Python to perform the computation");
        Python::with_gil(|py| {
            self.model
                .as_ref(py)
                .call_method("compute", (), None)
                .unwrap();
        })
    }
}
}

By doing so, you catch the result of the Python computation and check its type, so that you can deliver a better error message before unwrapping.

Of course, it does not cover all possible wrong outputs: the user could return a list of strings instead of a list of floats. In this case, a runtime panic would still occur due to PyO3, but with an error message that is much more difficult for a non-Rust user to decipher.

It is up to the developer exposing the Rust code to decide how much effort to invest in Python type error handling and improved error messages.
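As one possible (sketched, not prescriptive) improvement, the final extract() could be unwrapped with a custom message instead of a bare unwrap(), so that a list with the wrong element types also produces a readable error. The extract_results helper below is hypothetical and not part of the tutorial code:

use pyo3::prelude::*;
use pyo3::types::PyAny;

// Hypothetical helper: convert the value returned by the Python
// `get_results()` method into a Vec<f64>, panicking with a message that
// names the offending method and includes the underlying conversion error
// instead of the opaque `Result::unwrap()` panic.
fn extract_results(py_result: &PyAny) -> Vec<f64> {
    py_result.extract().unwrap_or_else(|err| {
        panic!(
            "get_results() must return a list of floats, conversion failed: {}",
            err
        )
    })
}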

The final code

Now let's expose the solve() function to make it available from Python.

It is not possible to directly expose the solve function to Python, as the type conversion cannot be performed. It requires an object implementing the Model trait as input.

However, UserModel already implements this trait. Because of this, we can write a function wrapper that takes the UserModel (which has already been exposed to Python) as an argument in order to call the core function solve.

It is also required to make the struct public.


#![allow(unused)]
fn main() {
use pyo3::prelude::*;
use pyo3::types::PyAny;

pub trait Model {
    fn set_variables(&mut self, var: &Vec<f64>);
    fn get_results(&self) -> Vec<f64>;
    fn compute(&mut self);
}

pub fn solve<T: Model>(model: &mut T) {
  println!("Magic solver that mutates the model into a resolved state");
}

#[pyfunction]
#[pyo3(name = "solve")]
pub fn solve_wrapper(model: &mut UserModel) {
    solve(model);
}

#[pyclass]
pub struct UserModel {
    model: Py<PyAny>,
}

#[pymodule]
fn trait_exposure(_py: Python, m: &PyModule) -> PyResult<()> {
    m.add_class::<UserModel>()?;
    m.add_function(wrap_pyfunction!(solve_wrapper, m)?)?;
    Ok(())
}

#[pymethods]
impl UserModel {
    #[new]
    pub fn new(model: Py<PyAny>) -> Self {
        UserModel { model }
    }

    pub fn set_variables(&mut self, var: Vec<f64>) {
        println!("Set variables from Python calling Rust");
        Model::set_variables(self, &var)
    }

    pub fn get_results(&mut self) -> Vec<f64> {
        println!("Get results from Python calling Rust");
        Model::get_results(self)
    }

    pub fn compute(&mut self) {
        Model::compute(self)
    }
}

impl Model for UserModel {
    fn set_variables(&mut self, var: &Vec<f64>) {
        println!("Rust calling Python to set the variables");
        Python::with_gil(|py| {
            let values: Vec<f64> = var.clone();
            let list: PyObject = values.into_py(py);
            let py_model = self.model.as_ref(py);
            py_model
                .call_method("set_variables", (list,), None)
                .unwrap();
        })
    }

    fn get_results(&self) -> Vec<f64> {
        println!("Get results from Rust calling Python");
        Python::with_gil(|py| {
            let py_result: &PyAny = self
                .model
                .as_ref(py)
                .call_method("get_results", (), None)
                .unwrap();

            if py_result.get_type().name().unwrap() != "list" {
                panic!("Expected a list for the get_results() method signature, got {}", py_result.get_type().name().unwrap());
            }
            py_result.extract()
        })
        .unwrap()
    }

    fn compute(&mut self) {
        println!("Rust calling Python to perform the computation");
        Python::with_gil(|py| {
            self.model
                .as_ref(py)
                .call_method("compute", (), None)
                .unwrap();
        })
    }
}
}

Changelog

All notable changes to this project will be documented in this file. For help with updating to new PyO3 versions, please see the migration guide.

The format is based on Keep a Changelog and this project adheres to Semantic Versioning.

0.14.3 - 2021-08-22

Added

  • Add PyString::data() to access the raw bytes stored in a Python string. #1794

Fixed

  • Raise AttributeError to avoid panic when calling del on a #[setter] defined class property. #1779
  • Restrict FFI definitions PyGILState_Check and Py_tracefunc to the unlimited API. #1787
  • Add missing _type field to PyStatus struct definition. #1791
  • Reduce the lower bound of the num-complex optional dependency to support interop with rust-numpy and ndarray when building with the MSRV of 1.41. #1799
  • Fix memory leak in Python::run_code. #1806
  • Fix memory leak in PyModule::from_code. #1810
  • Remove use of pyo3:: in pyo3::types::datetime which broke builds using -Z avoid-dev-deps #1811

0.14.2 - 2021-08-09

Added

  • Add indexmap feature to add ToPyObject, IntoPy and FromPyObject implementations for indexmap::IndexMap. #1728
  • Add pyo3_build_config::add_extension_module_link_args() to use in build scripts to set linker arguments (for macOS). #1755
  • Add Python::with_gil_unchecked() unsafe variation of Python::with_gil() to allow obtaining a Python in scenarios where Python::with_gil() would fail. #1769

Changed

  • PyErr::new no longer acquires the Python GIL internally. #1724
  • Reverted PyO3 0.14.0's use of cargo:rustc-cdylib-link-arg in its build script, as Cargo unintentionally allowed crates to pass linker args to downstream crates in this way. Projects supporting macOS may need to restore .cargo/config.toml files. #1755

Fixed

  • Fix regression in 0.14.0 rejecting usage of #[doc(hidden)] on structs and functions annotated with PyO3 macros. #1722
  • Fix regression in 0.14.0 leading to incorrect code coverage being computed for #[pyfunction]s. #1726
  • Fix incorrect FFI definition of Py_Buffer on PyPy. #1737
  • Fix incorrect calculation of dictoffset on 32-bit Windows. #1475
  • Fix regression in 0.13.2 leading to linking to incorrect Python library on Windows "gnu" targets. #1759
  • Fix compiler warning: deny trailing semicolons in expression macro. #1762
  • Fix incorrect FFI definition of Py_DecodeLocale. The 2nd argument is now *mut Py_ssize_t instead of Py_ssize_t. #1766

0.14.1 - 2021-07-04

Added

  • Implement IntoPy<PyObject> for &PathBuf and &OsString. #1712

Fixed

  • Fix crashes on PyPy due to incorrect definitions of PyList_SET_ITEM. #1713

0.14.0 - 2021-07-03

Packaging

  • Update num-bigint optional dependency to 0.4. #1481
  • Update num-complex optional dependency to 0.4. #1482
  • Extend hashbrown optional dependency supported versions to include 0.11. #1496
  • Support PyPy 3.7. #1538

Added

  • Extend conversions for [T; N] to all N using const generics (on Rust 1.51 and up). #1128
  • Add conversions between OsStr/ OsString and Python strings. #1379
  • Add conversions between Path/ PathBuf and Python strings (and pathlib.Path objects). #1379 #1654
  • Add a new set of #[pyo3(...)] attributes to control various PyO3 macro functionality:
    • #[pyo3(from_py_with = "...")] function arguments and struct fields to override the default from-Python conversion. #1411
    • #[pyo3(name = "...")] for setting Python names. #1567
    • #[pyo3(text_signature = "...")] for setting text signature. #1658
  • Add FFI definition PyCFunction_CheckExact for Python 3.9 and later. #1425
  • Add FFI definition Py_IS_TYPE. #1429
  • Add FFI definition _Py_InitializeMain. #1473
  • Add FFI definitions from cpython/import.h. #1475
  • Add tuple and unit struct support for #[pyclass] macro. #1504
  • Add FFI definition PyDateTime_TimeZone_UTC. #1572
  • Add support for #[pyclass(extends=Exception)]. #1591
  • Add PyErr::cause and PyErr::set_cause. #1679
  • Add FFI definitions from cpython/pystate.h. #1687
  • Add wrap_pyfunction! macro to pyo3::prelude. #1695

Changed

  • Allow only one #[pymethods] block per #[pyclass] by default, to remove the dependency on inventory. Add a multiple-pymethods feature to opt-in the original behavior and dependency on inventory. #1457
  • Change PyTimeAccess::get_fold to return a bool instead of a u8. #1397
  • Deprecate FFI definition PyCFunction_Call for Python 3.9 and up. #1425
  • Deprecate FFI definition PyModule_GetFilename. #1425
  • The auto-initialize feature is no longer enabled by default. #1443
  • Change PyCFunction::new() and PyCFunction::new_with_keywords() to take &'static str arguments rather than implicitly copying (and leaking) them. #1450
  • Deprecate PyModule::call, PyModule::call0, PyModule::call1 and PyModule::get. #1492
  • Add length information to PyBufferErrors raised from PyBuffer::copy_to_slice and PyBuffer::copy_from_slice. #1534
  • Automatically set -undefined and dynamic_lookup linker arguments on macOS with the extension-module feature. #1539
  • Deprecate #[pyproto] methods which are easier to implement as #[pymethods]: #1560
    • PyBasicProtocol::__bytes__ and PyBasicProtocol::__format__
    • PyContextProtocol::__enter__ and PyContextProtocol::__exit__
    • PyDescrProtocol::__delete__ and PyDescrProtocol::__set_name__
    • PyMappingProtocol::__reversed__
    • PyNumberProtocol::__complex__ and PyNumberProtocol::__round__
    • PyAsyncProtocol::__aenter__ and PyAsyncProtocol::__aexit__
  • Deprecate several attributes in favor of the new #[pyo3(...)] options:
    • #[name = "..."], replaced by #[pyo3(name = "...")] #1567
    • #[pyfn(m, "name")], replaced by #[pyfn(m)] #[pyo3(name = "...")]. #1610
    • #[pymodule(name)], replaced by #[pymodule] #[pyo3(name = "...")] #1650
    • #[text_signature = "..."], replaced by #[pyo3(text_signature = "...")]. #1658
  • Reduce LLVM line counts to improve compilation times. #1604
  • No longer call PyEval_InitThreads in #[pymodule] init code. #1630
  • Use METH_FASTCALL argument passing convention, when possible, to improve #[pyfunction] and method performance. #1619, #1660
  • Filter sysconfigdata candidates by architecture when cross-compiling. #1626

Removed

  • Remove deprecated exception names BaseException etc. #1426
  • Remove deprecated methods Python::is_instance, Python::is_subclass, Python::release, Python::xdecref, and Py::from_owned_ptr_or_panic. #1426
  • Remove many FFI definitions which never existed in the Python C-API:
    • (previously deprecated) PyGetSetDef_INIT, PyGetSetDef_DICT, PyCoro_Check, PyCoroWrapper_Check, and PyAsyncGen_Check #1426
    • PyMethodDef_INIT #1426
    • PyTypeObject_INIT #1429
    • PyObject_Check, PySuper_Check, and FreeFunc #1438
    • PyModuleDef_INIT #1630
  • Remove pyclass implementation details from PyTypeInfo:
    • Type, DESCRIPTION, and FLAGS #1456
    • BaseType, BaseLayout, Layout, Initializer #1596
  • Remove PYO3_CROSS_INCLUDE_DIR environment variable and the associated C header parsing functionality. #1521
  • Remove raw_pycfunction! macro. #1619
  • Remove PyClassAlloc trait. #1657
  • Remove PyList::get_parked_item. #1664

Fixed

  • Remove FFI definition PyCFunction_ClearFreeList for Python 3.9 and later. #1425
  • PYO3_CROSS_LIB_DIR environment variable is no longer required when compiling for x86-64 Python from macOS arm64, and vice versa. #1428
  • Fix FFI definition _PyEval_RequestCodeExtraIndex, which took an argument of the wrong type. #1429
  • Fix FFI definition PyIndex_Check missing with the abi3 feature. #1436
  • Fix incorrect TypeError raised when keyword-only argument passed along with a positional argument in *args. #1440
  • Fix inability to use a named lifetime for &PyTuple of *args in #[pyfunction]. #1440
  • Fix use of Python argument for #[pymethods] inside macro expansions. #1505
  • No longer include __doc__ in __all__ generated for #[pymodule]. #1509
  • Always use cross-compiling configuration if any of the PYO3_CROSS family of environment variables are set. #1514
  • Support EnvironmentError, IOError, and WindowsError on PyPy. #1533
  • Fix unnecessary rebuilds when cycling between cargo check and cargo clippy in a Python virtualenv. #1557
  • Fix segfault when dereferencing ffi::PyDateTimeAPI without the GIL. #1563
  • Fix memory leak in FromPyObject implementations for u128 and i128. #1638
  • Fix #[pyclass(extends=PyDict)] leaking the dict contents on drop. #1657
  • Fix segfault when calling PyList::get_item with negative indices. #1668
  • Fix FFI definitions of PyEval_SetProfile/PyEval_SetTrace to take Option<Py_tracefunc> parameters. #1692
  • Fix ToPyObject impl for HashSet to accept non-default hashers. #1702

0.13.2 - 2021-02-12

Packaging

  • Lower minimum supported Rust version to 1.41. #1421

Added

  • Add unsafe API with_embedded_python_interpreter to initialize a Python interpreter, execute a closure, and finalize the interpreter. #1355
  • Add serde feature which provides implementations of Serialize and Deserialize for Py<T>. #1366
  • Add FFI definition _PyCFunctionFastWithKeywords on Python 3.7 and up. #1384
  • Add PyDateTime::new_with_fold() method. #1398
  • Add size_hint impls for {PyDict,PyList,PySet,PyTuple}Iterators. #1699

Changed

  • prepare_freethreaded_python will no longer register an atexit handler to call Py_Finalize. This resolves a number of issues with incompatible C extensions causing crashes at finalization. #1355
  • Mark PyLayout::py_init, PyClassDict::clear_dict, and opt_to_pyobj safe, as they do not perform any unsafe operations. #1404

Fixed

  • Fix support for using r#raw_idents as argument names in pyfunctions. #1383
  • Fix typo in FFI definition for PyFunction_GetCode (was incorrectly PyFunction_Code). #1387
  • Fix FFI definitions PyMarshal_WriteObjectToString and PyMarshal_ReadObjectFromString as available in limited API. #1387
  • Fix FFI definitions PyListObject and those from funcobject.h as requiring non-limited API. #1387
  • Fix unqualified Result usage in pyobject_native_type_base. #1402
  • Fix build on systems where the default Python encoding is not UTF-8. #1405
  • Fix build on mingw / MSYS2. #1423

0.13.1 - 2021-01-10

Added

  • Add support for #[pyclass(dict)] and #[pyclass(weakref)] with the abi3 feature on Python 3.9 and up. #1342
  • Add FFI definitions PyOS_BeforeFork, PyOS_AfterFork_Parent, PyOS_AfterFork_Child for Python 3.7 and up. #1348
  • Add an auto-initialize feature to control whether PyO3 should automatically initialize an embedded Python interpreter. For compatibility this feature is enabled by default in PyO3 0.13.1, but is planned to become opt-in from PyO3 0.14.0. #1347
  • Add support for cross-compiling to Windows without needing PYO3_CROSS_INCLUDE_DIR. #1350

Deprecated

  • Deprecate FFI definitions PyEval_CallObjectWithKeywords, PyEval_CallObject, PyEval_CallFunction, PyEval_CallMethod when building for Python 3.9. #1338
  • Deprecate FFI definitions PyGetSetDef_DICT and PyGetSetDef_INIT which have never been in the Python API. #1341
  • Deprecate FFI definitions PyGen_NeedsFinalizing, PyImport_Cleanup (removed in 3.9), and PyOS_InitInterrupts (3.10). #1348
  • Deprecate FFI definition PyOS_AfterFork for Python 3.7 and up. #1348
  • Deprecate FFI definitions PyCoro_Check, PyAsyncGen_Check, and PyCoroWrapper_Check, which have never been in the Python API (for the first two, it is possible to use PyCoro_CheckExact and PyAsyncGen_CheckExact instead; these are the actual functions provided by the Python API). #1348
  • Deprecate FFI definitions for PyUnicode_FromUnicode, PyUnicode_AsUnicode and PyUnicode_AsUnicodeAndSize, which will be removed from 3.12 and up due to PEP 613. #1370

Removed

  • Remove FFI definition PyFrame_ClearFreeList when building for Python 3.9. #1341
  • Remove FFI definition _PyDict_Contains when building for Python 3.10. #1341
  • Remove FFI definitions PyGen_NeedsFinalizing and PyImport_Cleanup (for 3.9 and up), and PyOS_InitInterrupts (3.10). #1348

Fixed

  • Stop including Py_TRACE_REFS config setting automatically if Py_DEBUG is set on Python 3.8 and up. #1334
  • Remove #[deny(warnings)] attribute (and instead refuse warnings only in CI). #1340
  • Fix deprecation warning for missing __module__ with #[pyclass]. #1343
  • Correct return type of PyFrozenSet::empty to &PyFrozenSet (was incorrectly &PySet). #1351
  • Fix missing Py_INCREF on heap type objects on Python versions before 3.8. #1365

0.13.0 - 2020-12-22

Packaging

  • Drop support for Python 3.5 (as it is now end-of-life). #1250
  • Bump minimum supported Rust version to 1.45. #1272
  • Bump indoc dependency to 1.0. #1272
  • Bump paste dependency to 1.0. #1272
  • Rename internal crates pyo3cls and pyo3-derive-backend to pyo3-macros and pyo3-macros-backend respectively. #1317

Added

  • Add support for building for CPython limited API. Opting-in to the limited API enables a single extension wheel built with PyO3 to be installable on multiple Python versions. This required a few minor changes to the runtime behaviour of PyO3 #[pyclass] types. See the migration guide for full details. #1152
    • Add feature flags abi3-py36, abi3-py37, abi3-py38 etc. to set the minimum Python version when using the limited API. #1263
  • Add argument names to TypeError messages generated by pymethod wrappers. #1212
  • Add FFI definitions for PEP 587 "Python Initialization Configuration". #1247
  • Add FFI definitions for PyEval_SetProfile and PyEval_SetTrace. #1255
  • Add FFI definitions for context.h functions (PyContext_New, etc). #1259
  • Add PyAny::is_instance() method. #1276
  • Add support for conversion between char and PyString. #1282
  • Add FFI definitions for PyBuffer_SizeFromFormat, PyObject_LengthHint, PyObject_CallNoArgs, PyObject_CallOneArg, PyObject_CallMethodNoArgs, PyObject_CallMethodOneArg, PyObject_VectorcallDict, and PyObject_VectorcallMethod. #1287
  • Add conversions between u128/i128 and PyLong for PyPy. #1310
  • Add Python::version() and Python::version_info() to get the running interpreter version. #1322
  • Add conversions for tuples of length 10, 11, and 12. #1454

Changed

  • Change return type of PyType::name() from Cow<str> to PyResult<&str>. #1152
  • #[pyclass(subclass)] is now required for subclassing from Rust (was previously just required for subclassing from Python). #1152
  • Change PyIterator to be consistent with other native types: it is now used as &PyIterator instead of PyIterator<'a>. #1176
  • Change formatting of PyDowncastError messages to be closer to Python's builtin error messages. #1212
  • Change Debug and Display impls for PyException to be consistent with PyAny. #1275
  • Change Debug impl of PyErr to output more helpful information (acquiring the GIL if necessary). #1275
  • Rename PyTypeInfo::is_instance and PyTypeInfo::is_exact_instance to PyTypeInfo::is_type_of and PyTypeInfo::is_exact_type_of. #1278
  • Optimize PyAny::call0, Py::call0 and PyAny::call_method0 and Py::call_method0 on Python 3.9 and up. #1287
  • Require double-quotes for the pyclass name argument, e.g. #[pyclass(name = "MyClass")]. #1303

Deprecated

  • Deprecate Python::is_instance, Python::is_subclass, Python::release, and Python::xdecref. #1292

Removed

  • Remove deprecated ffi definitions PyUnicode_AsUnicodeCopy, PyUnicode_GetMax, _Py_CheckRecursionLimit, PyObject_AsCharBuffer, PyObject_AsReadBuffer, PyObject_CheckReadBuffer and PyObject_AsWriteBuffer, which will be removed in Python 3.10. #1217
  • Remove unused python3 feature. #1235

Fixed

  • Fix missing field in PyCodeObject struct (co_posonlyargcount) - caused invalid access to other fields in Python >3.7. #1260
  • Fix building for x86_64-unknown-linux-musl target from x86_64-unknown-linux-gnu host. #1267
  • Fix #[text_signature] interacting badly with rust r#raw_identifiers. #1286
  • Fix FFI definitions for PyObject_Vectorcall and PyVectorcall_Call. #1287
  • Fix building with Anaconda python inside a virtualenv. #1290
  • Fix definition of opaque FFI types. #1312
  • Fix using custom error type in pyclass #[new] methods. #1319

0.12.4 - 2020-11-28

Fixed

  • Fix reference count bug in implementation of From<Py<T>> for PyObject, a regression introduced in PyO3 0.12. #1297

0.12.3 - 2020-10-12

Fixed

  • Fix support for Rust versions 1.39 to 1.44, broken by an incorrect internal update to paste 1.0 which was done in PyO3 0.12.2. #1234

0.12.2 - 2020-10-12

Added

  • Add support for keyword-only arguments without default values in #[pyfunction]. #1209
  • Add Python::check_signals() as a safe wrapper for PyErr_CheckSignals(). #1214

Fixed

  • Fix invalid document for protocol methods. #1169
  • Hide docs of PyO3 private implementation details in pyo3::class::methods. #1169
  • Fix unnecessary rebuild on PATH changes when the python interpreter is provided by PYO3_PYTHON. #1231

0.12.1 - 2020-09-16

Fixed

  • Fix building for a 32-bit Python on 64-bit Windows with a 64-bit Rust toolchain. #1179
  • Fix building on platforms where c_char is u8. #1182

0.12.0 - 2020-09-12

Added

  • Add FFI definitions Py_FinalizeEx, PyOS_getsig, and PyOS_setsig. #1021
  • Add PyString::to_str for accessing PyString as &str. #1023
  • Add Python::with_gil for executing a closure with the Python GIL. #1037
  • Add type information to failures in PyAny::downcast(). #1050
  • Implement Debug for PyIterator. #1051
  • Add PyBytes::new_with and PyByteArray::new_with for initialising bytes and bytearray objects using a closure. #1074
  • Add #[derive(FromPyObject)] macro for enums and structs. #1065
  • Add Py::as_ref and Py::into_ref for converting Py<T> to &T. #1098
  • Add ability to return Result types other than PyResult from #[pyfunction], #[pymethod] and #[pyproto] functions. #1106.
  • Implement ToPyObject, IntoPy, and FromPyObject for hashbrown's HashMap and HashSet types (requires the hashbrown feature). #1114
  • Add #[pyfunction(pass_module)] and #[pyfn(pass_module)] to pass the module object as the first function argument. #1143
  • Add PyModule::add_function and PyModule::add_submodule as typed alternatives to PyModule::add_wrapped. #1143
  • Add native PyCFunction and PyFunction types. #1163

Changed

  • Rework exception types: #1024 #1115
    • Rename exception types from e.g. RuntimeError to PyRuntimeError. The old names continue to exist but are deprecated.
    • Exception objects are now accessible as &T or Py<T>, just like other Python-native types.
    • Rename PyException::py_err() to PyException::new_err().
    • Rename PyUnicodeDecodeErr::new_err() to PyUnicodeDecodeErr::new().
    • Remove PyStopIteration::stop_iteration().
  • Require T: Send for the return value T of Python::allow_threads. #1036
  • Rename PYTHON_SYS_EXECUTABLE to PYO3_PYTHON. The old name will continue to work (undocumented) but will be removed in a future release. #1039
  • Remove unsafe from signature of PyType::as_type_ptr. #1047
  • Change return type of PyIterator::from_object to PyResult<PyIterator> (was Result<PyIterator, PyDowncastError>). #1051
  • IntoPy is no longer implied by FromPy. #1063
  • Change PyObject to be a type alias for Py<PyAny>. #1063
  • Rework PyErr to be compatible with the std::error::Error trait: #1067 #1115
    • Implement Display, Error, Send and Sync for PyErr and PyErrArguments.
    • Add PyErr::instance() for accessing PyErr as &PyBaseException.
    • PyErr's fields are now an implementation detail. The equivalent values can be accessed with PyErr::ptype(), PyErr::pvalue() and PyErr::ptraceback().
    • Change receiver of PyErr::print() and PyErr::print_and_set_sys_last_vars() to &self (was self).
    • Remove PyErrValue, PyErr::from_value, PyErr::into_normalized(), and PyErr::normalize().
    • Remove PyException::into().
    • Remove Into<PyResult<T>> for PyErr and PyException.
  • Change methods generated by #[pyproto] to return NotImplemented if Python should try a reversed operation. #1072
  • Change argument to PyModule::add to impl IntoPy<PyObject> (was impl ToPyObject). #1124

Removed

  • Remove many exception and PyErr APIs; see the "changed" section above. #1024 #1067 #1115
  • Remove PyString::to_string (use new PyString::to_str). #1023
  • Remove PyString::as_bytes. #1023
  • Remove Python::register_any. #1023
  • Remove GILGuard::acquire from the public API. Use Python::acquire_gil or Python::with_gil. #1036
  • Remove the FromPy trait. #1063
  • Remove the AsPyRef trait. #1098

Fixed

  • Correct FFI definitions Py_SetProgramName and Py_SetPythonHome to take *const arguments (was *mut). #1021
  • Fix FromPyObject for num_bigint::BigInt for Python objects with an __index__ method. #1027
  • Correct FFI definition _PyLong_AsByteArray to take *mut c_uchar argument (was *const c_uchar). #1029
  • Fix segfault with #[pyclass(dict, unsendable)]. #1058 #1059
  • Fix using &Self as an argument type for functions in a #[pymethods] block. #1071
  • Fix best-effort build against PyPy 3.6. #1092
  • Fix many cases of lifetime elision in #[pyproto] implementations. #1093
  • Fix detection of Python build configuration when cross-compiling. #1095
  • Always link against libpython on android with the extension-module feature. #1095
  • Fix the + operator not trying __radd__ when both __add__ and __radd__ are defined in PyNumberProtocol (and similar for all other reversible operators). #1107
  • Fix building with Anaconda python. #1175

0.11.1 - 2020-06-30

Added

  • #[pyclass(unsendable)]. #1009

Changed

  • Update parking_lot dependency to 0.11. #1010

0.11.0 - 2020-06-28

Added

  • Support stable versions of Rust (>=1.39). #969
  • Add FFI definition PyObject_AsFileDescriptor. #938
  • Add PyByteArray::data, PyByteArray::as_bytes, and PyByteArray::as_bytes_mut. #967
  • Add GILOnceCell to use in situations where lazy_static or once_cell can deadlock. #975
  • Add Py::borrow, Py::borrow_mut, Py::try_borrow, and Py::try_borrow_mut for accessing #[pyclass] values. #976
  • Add IterNextOutput and IterANextOutput for returning from __next__ / __anext__. #997

Changed

  • Simplify internals of #[pyo3(get)] attribute. (Remove the hidden API GetPropertyValue.) #934
  • Call Py_Finalize at exit to flush buffers, etc. #943
  • Add type parameter to PyBuffer. #951
  • Require Send bound for #[pyclass]. #966
  • Add Python argument to most methods on PyObject and Py<T> to ensure GIL safety. #970
  • Change signature of PyTypeObject::type_object() - now takes Python argument and returns &PyType. #970
  • Change return type of PyTuple::slice() and PyTuple::split_from() from Py<PyTuple> to &PyTuple. #970
  • Change return type of PyTuple::as_slice to &[&PyAny]. #971
  • Rename PyTypeInfo::type_object to type_object_raw, and add Python argument. #975
  • Update num-complex optional dependency from 0.2 to 0.3. #977
  • Update num-bigint optional dependency from 0.2 to 0.3. #978
  • #[pyproto] is re-implemented without specialization. #961
  • PyClassAlloc::alloc is renamed to PyClassAlloc::new. #990
  • #[pyproto] methods can now have return value T or PyResult<T> (previously only PyResult<T> was supported). #996
  • #[pyproto] methods can now skip annotating the return type if it is (). #998

Removed

  • Remove ManagedPyRef (unused, and needs specialization) #930

Fixed

  • Fix passing explicit None to Option<T> argument #[pyfunction] with a default value. #936
  • Fix PyClass.__new__ not respecting subclasses when inherited by a Python class. #990
  • Fix returning Option<T> from #[pyproto] methods. #996
  • Fix accepting PyRef<Self> and PyRefMut<Self> to #[getter] and #[setter] methods. #999

0.10.1 - 2020-05-14

Fixed

  • Fix deadlock in Python::acquire_gil() after dropping a PyObject or Py<T>. #924

0.10.0 - 2020-05-13

Added

  • Add FFI definition _PyDict_NewPresized. #849
  • Implement IntoPy<PyObject> for HashSet and BTreeSet. #864
  • Add PyAny::dir method. #886
  • Gate macros behind a macros feature (enabled by default). #897
  • Add ability to define class attributes using #[classattr] on functions in #[pymethods]. #905
  • Implement Clone for PyObject and Py<T>. #908
  • Implement Deref<Target = PyAny> for all builtin types. (PyList, PyTuple, PyDict etc.) #911
  • Implement Deref<Target = PyAny> for PyCell<T>. #911
  • Add #[classattr] support for associated constants in #[pymethods]. #914

Changed

  • Panics will now be raised as a Python PanicException. #797
  • Change PyObject and Py<T> reference counts to decrement immediately upon drop when the GIL is held. #851
  • Allow PyIterProtocol methods to use either PyRef or PyRefMut as the receiver type. #856
  • Change the implementation of FromPyObject for Py<T> to apply to a wider range of T, including all T: PyClass. #880
  • Move all methods from the ObjectProtocol trait to the PyAny struct. #911
  • Remove need for #![feature(specialization)] in crates depending on PyO3. #917

Removed

  • Remove PyMethodsProtocol trait. #889
  • Remove num-traits dependency. #895
  • Remove ObjectProtocol trait. #911
  • Remove PyAny::None. Users should use Python::None instead. #911
  • Remove all *ProtocolImpl traits. #917

Fixed

  • Fix support for __radd__ and other __r*__ methods as implementations for Python mathematical operators. #839
  • Fix panics during garbage collection when traversing objects that were already mutably borrowed. #855
  • Prevent &'static references to Python objects as arguments to #[pyfunction] and #[pymethods]. #869
  • Fix lifetime safety bug with AsPyRef::as_ref. #876
  • Fix #[pyo3(get)] attribute on Py<T> fields. #880
  • Fix segmentation faults caused by functions such as PyList::get_item returning borrowed objects when it was not safe to do so. #890
  • Fix segmentation faults caused by nested Python::acquire_gil calls creating dangling references. #893
  • Fix segmentation faults when a panic occurs during a call to Python::allow_threads. #912

0.9.2 - 2020-04-09

Added

  • FromPyObject implementations for HashSet and BTreeSet. #842

Fixed

  • Correctly detect 32bit architecture. #830

0.9.1 - 2020-03-23

Fixed

  • Error messages for #[pyclass]. #826
  • FromPyObject implementation for PySequence. #827

0.9.0 - 2020-03-19

Added

  • PyCell, which has RefCell-like features. #770
  • PyClass, PyLayout, PyClassInitializer. #683
  • Implemented IntoIterator for PySet and PyFrozenSet. #716
  • FromPyObject is now automatically implemented for T: Clone pyclasses. #730
  • #[pyo3(get)] and #[pyo3(set)] will now use the Rust doc-comment from the field for the Python property. #755
  • #[setter] functions may now take an argument of pyo3::Python. #760
  • PyTypeInfo::BaseLayout and PyClass::BaseNativeType. #770
  • PyDowncastImpl. #770
  • Implement FromPyObject and IntoPy<PyObject> traits for arrays (up to 32). #778
  • migration.md and types.md in the guide. #795, #802
  • ffi::{_PyBytes_Resize, _PyDict_Next, _PyDict_Contains, _PyDict_GetDictPtr}. #820

Changed

  • #[new] does not take PyRawObject and can return Self. #683
  • The blanket implementations for FromPyObject for &T and &mut T are no longer specializable. Implement PyTryFrom for your type to control the behavior of FromPyObject::extract() for your types. #713
  • The implementation for IntoPy<U> for T where U: FromPy<T> is no longer specializable. Control the behavior of this via the implementation of FromPy. #713
  • Use parking_lot::Mutex instead of spin::Mutex. #734
  • Bumped minimum Rust version to 1.42.0-nightly 2020-01-21. #761
  • PyRef and PyRefMut are renewed for PyCell. #770
  • Some new FFI functions for Python 3.8. #784
  • PyAny is now on the top level module and prelude. #816

Removed

  • PyRawObject. #683
  • PyNoArgsFunction. #741
  • initialize_type(). To set the module name for a #[pyclass], use the module argument to the macro. #751
  • AsPyRef::as_mut/with/with_mut/into_py/into_mut_py. #770
  • PyTryFrom::try_from_mut/try_from_mut_exact/try_from_mut_unchecked. #770
  • Python::mut_from_owned_ptr/mut_from_borrowed_ptr. #770
  • ObjectProtocol::get_base/get_mut_base. #770

Fixed

  • Fixed unsoundness of subclassing. #683.
  • Clear error indicator when the exception is handled on the Rust side. #719
  • Usage of raw identifiers with #[pyo3(set)]. #745
  • Usage of PyObject with #[pyo3(get)]. #760
  • #[pymethods] used in conjunction with #[cfg]. #769
  • "*" in a #[pyfunction()] argument list incorrectly accepting any number of positional arguments (use args = "*" when this behaviour is desired). #792
  • PyModule::dict. #809
  • Fix the case where DESCRIPTION is not null-terminated. #822

[0.8.5] - 2020-01-05

Added

  • Implemented FromPyObject for HashMap and BTreeMap
  • Support for #[name = "foo"] attribute for #[pyfunction] and in #[pymethods]. #692

0.8.4 - 2019-12-14

Added

  • Support for #[text_signature] attribute. #675

0.8.3 - 2019-11-23

Removed

  • #[init] is removed. #658

Fixed

  • Now all &Py~ types have !Send bound. #655
  • Fix a compile error raised by the stabilization of ! type. #672.

0.8.2 - 2019-10-27

Added

  • FFI compatibility for PEP 590 Vectorcall. #641

Fixed

  • Fix PySequenceProtocol::set_item. #624
  • Fix a corner case of BigInt::FromPyObject. #630
  • Fix index errors in parameter conversion. #631
  • Fix handling of invalid utf-8 sequences in PyString::as_bytes. #639 and PyString::to_string_lossy #642.
  • Remove __contains__ and __iter__ from PyMappingProtocol. #644
  • Fix proc-macro definition of PySetAttrProtocol. #645

0.8.1 - 2019-10-08

Added

Fixed

  • Make sure the right Python interpreter is used in OSX builds. #604
  • Patch specialization being broken by Rust 1.40. #614
  • Fix a segfault around PyErr. #597

0.8.0 - 2019-09-16

Added

  • module argument to pyclass macro. #499
  • py_run! macro #512
  • Use existing fields and methods before calling custom getattr. #505
  • PyBytes can now be indexed just like Vec<u8>
  • Implement IntoPy<PyObject> for PyRef and PyRefMut.

Changed

  • Using the gc parameter for pyclass (e.g. #[pyclass(gc)]) without implementing the class::PyGCProtocol trait is now a compile-time error. Failing to implement this trait could lead to segfaults. #532
  • PyByteArray::data has been replaced with PyDataArray::to_vec because returning a &[u8] is unsound. (See this comment for a great write-up for why that was unsound)
  • Replace mashup with paste.
  • GILPool gained a Python marker to prevent it from being misused to release Python objects without the GIL held.

Removed

  • IntoPyObject was replaced with IntoPy<PyObject>
  • #[pyclass(subclass)] is hidden behind an unsound-subclass feature because it was causing segmentation faults.

Fixed

  • More readable error message for generics in pyclass #503

0.7.0 - 2019-05-26

Added

  • PyPy support by omerbenamram in #393
  • Have PyModule generate an index of its members (__all__ list).
  • Allow slf: PyRef<T> for pyclass (#419)
  • Allow the use of lifetime specifiers in pymethods
  • Add marshal module. #460

Changed

  • Python::run returns PyResult<()> instead of PyResult<&PyAny>.
  • Methods decorated with #[getter] and #[setter] can now omit wrapping the result type in PyResult if they don't raise exceptions.

Fixed

  • type_object::PyTypeObject has been marked unsafe because breaking the contract of type_object::PyTypeObject::init_type can lead to UB.
  • Fixed automatic derive of PySequenceProtocol implementation in #423.
  • Capitalization & better wording to README.md.
  • Docstrings of properties are now properly set using the doc comment of the #[getter] method.
  • Fixed issues with pymethods crashing on doc comments containing double quotes.
  • PySet::new and PyFrozenSet::new now return PyResult<&Py[Frozen]Set>; exceptions are raised if the items are not hashable.
  • Fixed building using venv on Windows.
  • PyTuple::new now returns &PyTuple instead of Py<PyTuple>.
  • Fixed several issues with argument parsing; notably, the *args and **kwargs tuple/dict no longer contain arguments that are otherwise assigned to parameters.

0.6.0 - 2019-03-28

Regressions

  • Currently, #341 causes cargo test to fail with weird linking errors when the extension-module feature is activated. For now you can work around this by making the extension-module feature optional and running the tests with cargo test --no-default-features:
[dependencies.pyo3]
version = "0.6.0"

[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]

Added

  • Added a wrap_pymodule! macro similar to the existing wrap_pyfunction! macro. Only available on Python 3.
  • Added support for cross compiling (e.g. to arm v7) by mtp401 in #327. See the "Cross Compiling" section in the "Building and Distribution" chapter of the guide for more details.
  • The PyRef and PyRefMut types, which allow differentiating between an instance of a Rust struct on the Rust heap and an instance that is embedded inside a Python object. By kngwyu in #335
  • Added FromPy<T> and IntoPy<T>, which are equivalent to From<T> and Into<T> except that they require a GIL token.
  • Added ManagedPyRef, which should eventually replace ToBorrowedObject.

Changed

  • Renamed PyObjectRef to PyAny in #388
  • Renamed add_function to add_wrapped as it now also supports modules.
  • Renamed #[pymodinit] to #[pymodule]
  • py.init(|| value) becomes Py::new(value)
  • py.init_ref(|| value) becomes PyRef::new(value)
  • py.init_mut(|| value) becomes PyRefMut::new(value).
  • PyRawObject::init is now infallible, e.g. it returns () instead of PyResult<()>.
  • Renamed py_exception! to create_exception! and refactored the error macros.
  • Renamed wrap_function! to wrap_pyfunction!
  • Renamed #[prop(get, set)] to #[pyo3(get, set)]
  • #[pyfunction] now supports the same arguments as #[pyfn()]
  • Some macros now emit proper spanned errors instead of panics.
  • Migrated to the 2018 edition
  • crate::types::exceptions moved to crate::exceptions
  • Replace IntoPyTuple with IntoPy<Py<PyTuple>>.
  • IntoPyPointer and ToPyPointer moved into the crate root.
  • class::CompareOp moved into class::basic::CompareOp
  • PyTypeObject is now a direct subtrait of PyTypeCreate, removing the old cyclical implementation in #350
  • Add PyList::{sort, reverse} by chr1sj0nes in #357 and #358
  • Renamed the typeob module to type_object

Removed

  • PyToken was removed due to unsoundness (See #94).
  • Removed the unnecessary type parameter from PyObjectAlloc
  • NoArgs. Just use an empty tuple
  • PyObjectWithGIL. PyNativeType is sufficient now that PyToken is removed.

Fixed

  • A soundness hole where every instance of a #[pyclass] struct was considered to be part of a Python object, even though you can create instances that are not part of the Python heap. This was fixed through PyRef and PyRefMut.
  • Fix kwargs support in #328.
  • Add full support for __dict__ in #403.

0.5.3 - 2019-01-04

Fixed

  • Fix memory leak in ArrayList by kngwyu #316

0.5.2 - 2018-11-25

Fixed

  • Fix nondeterministic segfaults when creating many objects, by kngwyu in #281

[0.5.1] - 2018-11-24

Yanked

0.5.0 - 2018-11-11

Added

  • #[pyclass] objects can now be returned from Rust functions
  • PyComplex by kngwyu in #226
  • PyDict::from_sequence(), equivalent to dict([(key, val), ...])
  • Bindings for the datetime standard library types: PyDate, PyTime, PyDateTime, PyTzInfo, PyDelta with associated ffi types, by pganssle #200.
  • PyString, PyUnicode, and PyBytes now have an as_bytes() method that returns &[u8].
  • PyObjectProtocol::get_type_ptr() by ijl in #242

Changed

  • Removes the types from the root module and the prelude. They now live in pyo3::types instead.
  • All exceptions are constructed with py_err instead of new, as they return PyErr and not Self.
  • as_mut and friends take an &mut self instead of &self
  • ObjectProtocol::call now takes an Option<&PyDict> for the kwargs instead of an IntoPyDictPointer.
  • IntoPyDictPointer was replaced by IntoPyDict, which doesn't convert PyDict itself anymore and returns a PyDict instead of *mut PyObject.
  • PyTuple::new now takes an IntoIterator instead of a slice
  • Updated to syn 0.15
  • Split PyTypeObject into PyTypeObject without the create method and PyTypeCreate, which requires PyObjectAlloc<Self> + PyTypeInfo + Sized.
  • Ran cargo edition --fix, which prefixed paths with crate:: for Rust 2018
  • Renamed async to pyasync as async will be a keyword in the 2018 edition.
  • Starting to use NonNull<*mut PyObject> for Py and PyObject by ijl #260

Removed

  • Removed most entries from the prelude. The new prelude is small and clear.
  • Slowly removing specialization uses
  • PyString, PyUnicode, and PyBytes no longer have a data() method (replaced by as_bytes()) and PyStringData has been removed.
  • The pyobject_extract macro

Fixed

  • Added an explanation that the GIL can temporarily be released even while holding a GILGuard.
  • Lots of clippy errors
  • Fix segfault on calling an unknown method on a PyObject
  • Work around a bug in the Rust compiler by kngwyu #252
  • Fixed a segfault with subclassing pyo3-created classes and using __class__ by kngwyu #263

0.4.1 - 2018-08-20

Changed

  • PyTryFrom's error is now always PyDowncastError

Fixed

  • Fixed compilation on nightly since use_extern_macros was stabilized

Removed

  • The pyobject_downcast macro

0.4.0 - 2018-07-30

Changed

  • Merged both examples into one
  • Rustfmt all the things :heavy_check_mark:
  • Switched to Keep a Changelog

Removed

0.3.2 - 2018-07-22

Changed

  • Replaced concat_idents with mashup

0.3.1 - 2018-07-18

Fixed

  • Fixed scoping bug in pyobject_native_type that would break rust-numpy

0.3.0 - 2018-07-18

Added

  • A few internal macros became part of the public api (#155, #186)
  • Always clone in getters. This allows using the get annotation on all Clone types

Changed

  • Upgraded to syn 0.14 which means much better error messages :tada:
  • 128 bit integer support by kngwyu (#137)
  • proc_macro has been stabilized on nightly (rust-lang/rust#52081). This means that we can remove the proc_macro feature, but now we need the use_extern_macros from the 2018 edition instead.
  • All proc macros are now prefixed with py and live in the prelude. This means you can use #[pyclass], #[pymethods], #[pyproto], #[pyfunction] and #[pymodinit] directly, at least after a use pyo3::prelude::*. They were also moved into a module called proc_macro. You shouldn't use #[pyo3::proc_macro::pyclass] or other longer paths in attributes because proc_macro_path_invoc isn't going to be stabilized soon.
  • Renamed the base option in the pyclass macro to extends.
  • #[pymodinit] uses the function name as the module name, unless the name is overridden with #[pymodinit(name)]
  • The guide is now properly versioned.

0.2.7 - 2018-05-18

Fixed

  • Fix nightly breakage with proc_macro_path

0.2.6 - 2018-04-03

Fixed

  • Fix compatibility with TryFrom trait #137

0.2.5 - 2018-02-21

Added

  • CPython 3.7 support

Fixed

  • Embedded CPython 3.7b1 crashes on initialization #110
  • Generated extension functions are weakly typed #108
  • call_method*() crashes when the method does not exist #113
  • Allow importing exceptions from nested modules #116

0.2.4 - 2018-01-19

Added

  • Allow getting a mutable ref from PyObject #106
  • Drop RefFromPyObject trait
  • Add Python::register_any() method

Fixed

  • Fix impl FromPyObject for Py<T>
  • Mark methods that work with raw pointers as unsafe #95

0.2.3 - 11-27-2017

Changed

  • Rustup to 1.23.0-nightly 2017-11-07

Fixed

  • Proper c_char usage #93

Removed

  • Remove use of now unneeded 'AsciiExt' trait

0.2.2 - 09-26-2017

Changed

  • Rustup to 1.22.0-nightly 2017-09-30

0.2.1 - 09-26-2017

Fixed

  • Fix rustc const_fn nightly breakage

0.2.0 - 08-12-2017

Added

  • Added inheritance support #15
  • Added weakref support #56
  • Added subclass support #64
  • Added self.__dict__ support #68
  • Added pyo3::prelude module #70
  • Better Iterator support for PyTuple, PyList, PyDict #75
  • Introduce IntoPyDictPointer similar to IntoPyTuple #69

Changed

  • Allow adding gc support without implementing PyGCProtocol #57
  • Refactor PyErr implementation. Drop py parameter from constructor.

0.1.0 - 07-23-2017

Added

  • Initial release