
TensorFlow Rust provides idiomatic Rust language bindings for TensorFlow.

Notice: This project is still under active development and not guaranteed to have a stable API.

  • Documentation
  • TensorFlow Rust Google Group
  • TensorFlow website
  • TensorFlow GitHub page

Getting Started

Since this crate depends on the TensorFlow C API, it needs to be downloaded or compiled first. This crate will automatically download or compile the TensorFlow shared libraries for you, but it is also possible to manually install TensorFlow and the crate will pick it up accordingly.

Prerequisites

If the TensorFlow shared libraries can already be found on your system, they will be used. If your system is x86-64 Linux or Mac, a prebuilt binary will be downloaded, and no special prerequisites are needed.

Otherwise, the following dependencies are needed to compile and build this crate, which involves compiling TensorFlow itself:

  • git
  • bazel
  • Python dependencies: numpy, dev, pip, and wheel
  • Optionally, CUDA packages to support GPU-based processing

The TensorFlow website provides detailed instructions on how to obtain and install said dependencies, so if you are unsure please check out the docs for further details.

Some of the examples use TensorFlow code written in Python and require a full TensorFlow installation.

The minimum supported Rust version is 1.49.

Usage

Add this to your Cargo.toml:

[dependencies]
tensorflow = "0.17.0"

and this to your crate root:

extern crate tensorflow;

Then run cargo build -j 1. The tensorflow-sys crate's build.rs either downloads a prebuilt, CPU-only binary (the default) or compiles TensorFlow from source if forced to by an environment variable. If TensorFlow is compiled during this process, note that the full compilation is very memory intensive; we therefore recommend the -j 1 flag, which tells cargo to use only one job, which in turn tells TensorFlow to build with only one job. If you have plenty of RAM, you can of course use a higher value.

To include the especially unstable API (which is currently the expr module), use --features tensorflow_unstable.

For now, please see the Examples for more details on how to use this binding.

GPU Support

To enable GPU support, use the tensorflow_gpu feature in your Cargo.toml:

[dependencies]
tensorflow = { version = "0.17.0", features = ["tensorflow_gpu"] }

Manual TensorFlow Compilation

If you want to work against unreleased/unsupported TensorFlow versions or use a build optimized for your machine, manual compilation is the way to go.

See tensorflow-sys/README.md for details.

FAQs

Why does the compiler say that parts of the API don't exist?

The especially unstable parts of the API (which is currently the expr module) are feature-gated behind the feature tensorflow_unstable to prevent accidental use. See http://doc.crates.io/manifest.html#the-features-section. (We would prefer using an #[unstable] attribute, but that doesn't exist yet.)
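The mechanics behind this are ordinary Cargo feature gating. A minimal, self-contained sketch (the feature declaration in Cargo.toml is shown as a comment, and the module body is a placeholder, not the crate's actual expr module):

```rust
// In Cargo.toml one would declare:
//
//   [features]
//   tensorflow_unstable = []
//
// Items gated on the feature are then compiled only when it is enabled:
#[cfg(feature = "tensorflow_unstable")]
pub mod expr {
    // Only exists when building with --features tensorflow_unstable.
}

fn main() {
    if cfg!(feature = "tensorflow_unstable") {
        println!("unstable API compiled in");
    } else {
        // Without the feature flag, the gated module simply does not
        // exist, which is why the compiler reports missing items.
        println!("unstable API not compiled in");
    }
}
```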

How do I...?

Try the documentation first, and see if it answers your question. If not, take a look at the examples folder. Note that there may not be an example for your exact question, but it may be answered by an example demonstrating something else.

If none of the above help, you can ask your question on TensorFlow Rust Google Group.

Contributing

Developers and users are welcome to join the TensorFlow Rust Google Group.

Please read the contribution guidelines on how to contribute code.

This is not an official Google product.

RFCs are issues tagged with RFC. Check them out and comment. Discussions are welcome. After all, that is the purpose of a Request for Comments!

License

This project is licensed under the terms of the Apache 2.0 license.

Issues

A collection of the latest issues

Corallus-Caninus (bug)

tensor_array_concat_v3 doesn't build the dtype attribute.

I get: thread 'layers::test_concat_split' panicked at 'called Result::unwrap() on an Err value: {inner:0x2392837d900, InvalidArgument: NodeDef missing attr 'dtype' from Op<name=TensorArrayConcatV3; signature=handle:resource, flow_in:float -> value:dtype, lengths:int64; attr=dtype:type; attr=element_shape_except0:shape,default=; is_stateful=true>; NodeDef: {{node TensorArrayConcatV3}}}', src\layers.rs:207:58

for: let group_op = ops::tensor_array_concat_v3(handle_output.clone(), flow_output.clone(), scope)?; where handle_output and flow_output are the corresponding outputs from a TensorArrayV3.

Corallus-Caninus (enhancement)

I will be referencing the following documentation: https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/tensor-array-write

as well as the following source code: src/graph.rs (line 1195, function get_attr_metadata and the following get_attr_* functions) and src/ops/ops_impl (line 123054, struct TensorArrayWriteV3 and its implementations).

TensorArray is expected to return handle and flow according to the C++ API.

These Output types are required for tensor_array_write (tensor_array_write_v3 in my case).

However, going through the source in graph.rs, I can only read attributes as native types to recover these outputs, e.g. &str and f32. TensorArrayV3's constructor only returns one Output type.

tensor_array_write_v3() has all its generics set as Into, which is correct, but I cannot retrieve these outputs from the TensorArrayV3::build() constructor, which only returns one Output type.

I have tried passing this Output type into tensor_array_write_v3 by cloning my TensorArray object for the handle and flow parameters, hoping TensorFlow would associate the handle and flow attributes as Output types, but this throws an internal error.

for posterity, this is my function being developed and tested:

Corallus-Caninus (bug)

I cannot construct a split operation. When given the appropriate types in ops::split(), the attribute num_split does not get assigned. I'd reference the line, but the file is too large for GitHub; in my IDE it is line 111704 in src/ops/ops_impl.

There also seems to be a discrepancy between num_split and split_dim.

It also doesn't seem like a great idea to have if statements in the build_impl codegen where those attributes are mandatory without a default value for the node, but I am still learning the codebase.

If anyone sees this, please let me know how I can help; otherwise I'll wing it.

Corallus-Caninus

I am trying to backprop/minimize my network without the input vector, since I already have output and label vectors.

This is the relevant code I'm trying to implement:

where Output_op and Label are my output and label operations respectively, and output and labels are my output and label tensors. self.minimize is either the GradientDescent optimizer or the Adadelta optimizer. The Error operation is defined as a function of output and label exclusively. The network is very similar to the xor example in this repository and is from my NormNet repo (it's very messy and initial, so beware).

Based on my understanding of backprop this should be possible. Is this feature missing or did I make a mistake? Please let me know how I can clarify this further.

stdout log from runtime:

thread 'tests::test_evaluate' panicked at 'called Result::unwrap() on an Err value: {inner:0x2147c7c9480, InvalidArgument: You must feed a value for placeholder tensor 'input' with dtype float and shape [1,2] [[{{node input}}]]}', src\lib.rs:1197:94 note: run with RUST_BACKTRACE=1 environment variable to display a backtrace test tests::test_evaluate ... FAILED

SamKG (bug)

The Merge operation (https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/merge) should accept a list of inputs, and output the first received tensor. However, the rust binding (https://tensorflow.github.io/rust/tensorflow/ops/fn.merge.html) currently accepts only a single input (despite the input variable being named "inputs"). Digging through the source code, I found that Merge.build_impl internally also calls nd.add_input instead of nd.add_inputs (https://tensorflow.github.io/rust/src/tensorflow/ops/ops_impl.rs.html#57909)

Is there a way to pass multiple inputs using this api?

dskkato

Issue

Build from source on macOS (Intel CPU) fails due to a name mismatch. The panic from the tensorflow-sys crate occurred just after the following log message (see the full log message in the details):

There seem to be several issues here when I checked the build directory.

  • src may be libtensorflow_framework.2.dylib, not libtensorflow_framework.dylib.2
  • also, target may be libtensorflow_framework.2.dylib, not libtensorflow_framework.so.2

Build log

build log except for that from bazel

Build environment:

zzeee

ERROR: The project you're trying to build requires Bazel 3.7.2 (specified in /Users/andrey/.cargo/registry/src/github.com-1ecc6299db9ec823/tensorflow-sys-0.20.0/target/source-v2.5.0/.bazelversion), but it wasn't found in /opt/homebrew/Cellar/bazel/4.2.2/libexec/bin.

Needs Bazel 3.7.2, which is not available for M1 Macs.

Versions starting from 4.1 are available.

brianjjones (bug)

The Cargo.toml has the correct dependencies; however, crates.io appears not to have been updated. If I use tensorflow-internal-macros = "0.0.1" in my project, it imports proc-macro2 v0.4.30, quote v0.6.13, syn v0.15.44, and unicode-xid v0.1.0.

This doesn't prevent me from building, but I have a project I want to create a PR for that requires there not be any duplicate crates in the lockfile. Can you please update crates.io so that this is no longer an issue?

dskkato

When I run tests in the master branch, the following two tests will always panic when executed with the tensorflow_gpu feature. I'll look into the cause when I have time.

My environment is as follows:

  • branch: master
    • commit: de27f4edf55e877f6239de132ea7270129a9eb31
  • command: > cargo clean && cargo test --features tensorflow_gpu
  • Windows 10 Pro
    • CUDA: v11.2
    • cudnn: v8.1.0
  • rustc version is rustc 1.56.0 (09c42c458 2021-10-18)
tpmccallum (enhancement)

Whilst "running" a model can be done via TensorFlow APIs, there is a series of steps involved in the post-processing phase. For example, the outputs from YOLOv4 are just byte arrays and therefore require quite a lot of processing in order to create useful information which can be used to create an output image with labels, bounding boxes, and so forth.

Here is an example of where we can run TensorFlow Lite using the YOLOv4 model [1]. As you can see at the end of this Medium article [2], the outputs are Identity and Identity_1, which contain data like the following

Identity

Identity_1

I can see places where custom code is written to parse the tensor i.e. this Microsoft .NET / C# code [3].

As far as I am aware, there is nothing like this written in Rust yet. Is this something we could create in this repo as a feature add?

Thanks so much Tim

[1] https://github.com/second-state/wasm-learning/tree/master/faas/yolo-tflite#option-1 [2] https://medium.com/wasm/ai-on-a-cloud-native-webassembly-runtime-wasmedge-part-i-3bf3714a64ea [3] https://docs.microsoft.com/en-us/dotnet/machine-learning/tutorials/object-detection-onnx#create-the-parser
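As a flavor of what such post-processing involves, here is a minimal sketch of turning a flat float buffer into structured detections. The layout assumed here (each detection is [x, y, w, h, confidence]) is made up for illustration and does not match YOLOv4's actual output format, which also includes class scores and requires non-maximum suppression:

```rust
#[derive(Debug)]
struct Detection {
    bbox: [f32; 4], // x, y, w, h (layout assumed for illustration)
    confidence: f32,
}

// Split a flat buffer into fixed-size records and keep only the
// detections above a confidence threshold.
fn parse_detections(raw: &[f32], threshold: f32) -> Vec<Detection> {
    raw.chunks_exact(5)
        .filter(|c| c[4] >= threshold)
        .map(|c| Detection {
            bbox: [c[0], c[1], c[2], c[3]],
            confidence: c[4],
        })
        .collect()
}

fn main() {
    // Two fake detections; only the first passes the 0.5 threshold.
    let raw = [0.1f32, 0.2, 0.3, 0.4, 0.9, 0.5, 0.5, 0.1, 0.1, 0.2];
    let dets = parse_detections(&raw, 0.5);
    assert_eq!(dets.len(), 1);
    println!("{} detection(s)", dets.len());
}
```

A real post-processing crate would additionally decode class probabilities, scale boxes to image coordinates, and apply non-maximum suppression.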

dskkato (cleanup)

In the 0.17.0 release, the backend of this crate became TF 2.5, while the accompanying example Python code is still written for the TF 1.x series.

  • addition - resolved by #307
  • regression
  • regression_checkpoint
  • regression_savedmodel - resolved by #312

In addition to those examples, I would also like to add some Keras examples like #286, since its signature usage was helpful for me.

Would you mind if I try these migrations? I have already confirmed basic TF 2.5 usage, posted in my repository.

cheungchingli (bug)

ERROR: /root/tensorflow/third_party/llvm/BUILD:1:10: //third_party/llvm:expand_cmake_vars: This target is being built for Python 2 but (transitively) includes Python 3-only sources. You can get diagnostic information about which dependencies introduce this version requirement by running the find_requirements aspect. If this is used in a genrule, you may need to migrate from tools to exec_tools. For more info see the documentation for the srcs_version attribute: https://docs.bazel.build/versions/master/be/python.html#py_binary.srcs_version caused by //third_party/llvm:expand_cmake_vars

abrown (enhancement)

Is there any interest to add a way to do runtime linking (e.g., using libloading) to installed TensorFlow shared libraries?

adamcrume (enhancement)

This will require adding a new enum which looks like the generated protos::AttrValue_oneof_value (in protos/attr_value.rs), and default_value will need to be optional.
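A sketch of what such an enum might look like, modeled loosely on a protobuf oneof. The variant names and set here are hypothetical, not the actual generated protos::AttrValue_oneof_value:

```rust
// Hypothetical mirror of an attribute-value oneof; the real generated
// type has more variants and different names.
#[derive(Debug, Clone, PartialEq)]
pub enum AttrValue {
    S(Vec<u8>), // string
    I(i64),     // int
    F(f32),     // float
    B(bool),    // bool
    List(Vec<AttrValue>),
}

fn main() {
    // Making default_value optional maps naturally to Option<AttrValue>.
    let default_value: Option<AttrValue> = Some(AttrValue::I(3));
    assert_eq!(default_value, Some(AttrValue::I(3)));
    println!("ok");
}
```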

allentv (enhancement)

Hi. I am a beginner in Rust and was looking for issues that I can start working on. Unfortunately there was only one at the moment, and it seems to already be worked on.

I was wondering if there is a GDoc or a wiki page I can refer to in order to understand how the project is structured, some of the high-level design decisions, and a list of good-to-haves for the crate.

ammaraskar (enhancement)

Hi there, we (Rust group @sslab-gatech) are scanning crates on crates.io for potential soundness bugs. We noticed that the TensorType trait is commented as:

Currently, all implementors must not implement Drop (or transitively contain anything that does) and must be bit-for-bit compatible with the corresponding C type. Clients must not implement this trait.

Since implementing this trait improperly potentially allows for dangerous behavior and is commented as something that shouldn't be implemented, would it make sense to seal this trait as per https://rust-lang.github.io/api-guidelines/future-proofing.html#sealed-traits-protect-against-downstream-implementations-c-sealed or to mark it an unsafe trait?
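For reference, the sealed-trait pattern from the API guidelines can be sketched in a few lines. The names here are illustrative, not the crate's actual layout:

```rust
mod private {
    // Not exported, so downstream crates cannot name it.
    pub trait Sealed {}
}

/// A stand-in for a trait like TensorType that should not be
/// implementable outside this crate.
pub trait ExampleTensorType: private::Sealed {
    fn type_name() -> &'static str;
}

pub struct Float32;
impl private::Sealed for Float32 {}
impl ExampleTensorType for Float32 {
    fn type_name() -> &'static str {
        "f32"
    }
}

fn main() {
    // Inside the crate everything works as usual...
    assert_eq!(Float32::type_name(), "f32");
    println!("ok");
    // ...but an external crate cannot implement ExampleTensorType,
    // because it cannot implement the unreachable Sealed supertrait.
}
```

Marking the trait `unsafe` is the alternative mentioned above: it keeps the trait implementable but forces implementors to acknowledge the bit-compatibility contract.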

justnoxx (question)

Hello!

After the 0.16 version was released, we decided to switch to TF2.

The issue is that it can't find the op by name, even though this op is present in the graph (checked with TensorBoard and a dumped model).

Could someone please help me or show me where I am wrong? Or is it better for now to go back to TF1?

My system:

  1. macOS 10.15.7
  2. cargo 1.45.0
  3. tensorflow: 2.3.0
  4. tensorflow-rust: 1.16.1

To reproduce it, please, use the examples.

This code produces the simple Sequential model using Keras:

This Rust code tries to load this model and calculate something (please note that I also tried test_in as the input name; the result is the same):

But it returns the following error:

Thanks.

lucamc9

Hi, is there a way to save the SavedModel after training, having loaded the graph from a SavedModel (as opposed to initialising the layers and variables like in examples/xor.rs)?

It seems like the SavedModelBuilder requires a collection of variables and a scope, which doesn't seem straightforward to obtain from an already existing graph.

Information - Updated May 04, 2022

Stars: 3.6K
Forks: 320
Issues: 48
