# Keras 3: Deep Learning for Humans

Keras 3 is a multi-backend deep learning framework, with support for JAX, TensorFlow, PyTorch, and OpenVINO (for inference only). Effortlessly build and train models for computer vision, natural language processing, audio processing, timeseries forecasting, recommender systems, etc.

- **Accelerated model development**: Ship deep learning solutions faster thanks to the high-level UX of Keras and the availability of easy-to-debug runtimes like PyTorch or JAX eager execution.
- **State-of-the-art performance**: By picking the backend that is the fastest for your model architecture (often JAX!), leverage speedups ranging from 20% to 350% compared to other frameworks. [Benchmark here](https://keras.io/getting_started/benchmarks/).
- **Datacenter-scale training**: Scale confidently from your laptop to large clusters of GPUs or TPUs.

Join nearly three million developers, from burgeoning startups to global enterprises, in harnessing the power of Keras 3.

## Installation

### Install with pip

Keras 3 is available on PyPI as `keras`. Note that Keras 2 remains available as the `tf-keras` package.

1. Install `keras`:

```
pip install keras --upgrade
```

2. Install backend package(s).

To use `keras`, you should also install the backend of choice: `tensorflow`, `jax`, or `torch`. Additionally, the `openvino` backend is available, with support for model inference only.

### Local installation

#### Minimal installation

Keras 3 is compatible with Linux and macOS systems. For Windows users, we recommend using WSL2 to run Keras.

To install a local development version:

1. Install dependencies:

```
pip install -r requirements.txt
```

2. Run the installation command from the root directory:

```
python pip_build.py --install
```

3. Run the API generation script when creating PRs that update `keras_export` public APIs:

```
./shell/api_gen.sh
```

## Backend Compatibility Table

The following table lists the minimum supported version of each backend for the latest stable release of Keras (v3.x):

| Backend    | Minimum Supported Version |
|------------|---------------------------|
| TensorFlow | 2.16.1                    |
| JAX        | 0.4.20                    |
| PyTorch    | 2.1.0                     |
| OpenVINO   | 2025.3.0                  |

#### Adding GPU support

The `requirements.txt` file installs a CPU-only version of TensorFlow, JAX, and PyTorch. For GPU support, we also provide a separate `requirements-{backend}-cuda.txt` file for TensorFlow, JAX, and PyTorch. These install all CUDA dependencies via `pip` and expect an NVIDIA driver to be pre-installed. We recommend a clean Python environment for each backend to avoid CUDA version mismatches. As an example, here is how to create a JAX GPU environment with `conda`:

```shell
conda create -y -n keras-jax python=3.10
conda activate keras-jax
pip install -r requirements-jax-cuda.txt
python pip_build.py --install
```

## Configuring your backend

You can export the environment variable `KERAS_BACKEND`, or you can edit your local config file at `~/.keras/keras.json` to configure your backend. Available backend options are: `"tensorflow"`, `"jax"`, `"torch"`, `"openvino"`. Example:

```
export KERAS_BACKEND="jax"
```

In Colab, you can do:

```python
import os
os.environ["KERAS_BACKEND"] = "jax"
import keras
```

**Note:** The backend must be configured before importing `keras`, and the backend cannot be changed after the package has been imported.

**Note:** The OpenVINO backend is an inference-only backend, meaning it is designed only for running model predictions using the `model.predict()` method.
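Putting the pieces together, a minimal end-to-end check that the configured backend works might look like the following sketch (random data and arbitrary layer sizes, shown purely for illustration):

```python
import os

# The backend must be set before `keras` is imported.
os.environ["KERAS_BACKEND"] = "jax"

import numpy as np
import keras

# A tiny classifier on random data, just to exercise the configured backend.
model = keras.Sequential(
    [
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ]
)
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 10, size=(256,))
model.fit(x, y, epochs=2, batch_size=32)
```

The same script runs unchanged on any other backend by switching the `KERAS_BACKEND` value (keeping in mind that OpenVINO supports `model.predict()` only).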
## Backwards compatibility

Keras 3 is intended to work as a drop-in replacement for `tf.keras` (when using the TensorFlow backend). Just take your existing `tf.keras` code, make sure that your calls to `model.save()` are using the up-to-date `.keras` format, and you're done.

If your `tf.keras` model does not include custom components, you can start running it on top of JAX or PyTorch immediately.

If it does include custom components (e.g. custom layers or a custom `train_step()`), it is usually possible to convert it to a backend-agnostic implementation in just a few minutes.

In addition, Keras models can consume datasets in any format, regardless of the backend you're using: you can train your models with your existing `tf.data.Dataset` pipelines or PyTorch `DataLoaders` (see the sketch at the end of this section).

## Why use Keras 3?

- Run your high-level Keras workflows on top of any framework -- benefiting at will from the advantages of each framework, e.g. the scalability and performance of JAX or the production ecosystem options of TensorFlow.
- Write custom components (e.g. layers, models, metrics) that you can use in low-level workflows in any framework.
    - You can take a Keras model and train it in a training loop written from scratch in native TF, JAX, or PyTorch.
    - You can take a Keras model and use it as part of a PyTorch-native `Module` or as part of a JAX-native model function.
- Make your ML code future-proof by avoiding framework lock-in.
- As a PyTorch user: get access to the power and usability of Keras, at last!
- As a JAX user: get access to a fully-featured, battle-tested, well-documented modeling and training library.

Read more in the [Keras 3 release announcement](https://keras.io/keras_3/).
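For instance, here is a minimal sketch of training a Keras model directly on a PyTorch `DataLoader` (random data, arbitrary layer sizes, and the `torch` backend chosen only for illustration):

```python
import os

os.environ["KERAS_BACKEND"] = "torch"  # any installed backend could be used here

import torch
import keras

# Wrap random tensors in a standard PyTorch DataLoader.
x = torch.rand(256, 20)
y = torch.randint(0, 10, (256,))
dataset = torch.utils.data.TensorDataset(x, y)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# A small Keras model trained directly on the DataLoader.
model = keras.Sequential(
    [
        keras.layers.Input(shape=(20,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ]
)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(loader, epochs=2)
```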
"},{"id":23,"name":"graphql"},{"id":84,"name":"gui"},{"id":91,"name":"http"},{"id":5,"name":"http-client"},{"id":51,"name":"iac"},{"id":30,"name":"ide"},{"id":78,"name":"iot"},{"id":40,"name":"json"},{"id":83,"name":"julian"},{"id":38,"name":"k8s"},{"id":31,"name":"language"},{"id":10,"name":"learning-resource"},{"id":33,"name":"lib"},{"id":41,"name":"linter"},{"id":28,"name":"lms"},{"id":16,"name":"logging"},{"id":76,"name":"low-code"},{"id":90,"name":"message-queue"},{"id":42,"name":"mobile-app"},{"id":18,"name":"monitoring"},{"id":36,"name":"networking"},{"id":7,"name":"node-version"},{"id":55,"name":"nosql"},{"id":57,"name":"observability"},{"id":46,"name":"orm"},{"id":52,"name":"os"},{"id":14,"name":"parser"},{"id":74,"name":"react"},{"id":82,"name":"real-time"},{"id":56,"name":"robot"},{"id":65,"name":"runtime"},{"id":32,"name":"sdk"},{"id":71,"name":"search"},{"id":63,"name":"secrets"},{"id":25,"name":"security"},{"id":85,"name":"server"},{"id":86,"name":"serverless"},{"id":70,"name":"storage"},{"id":75,"name":"system-design"},{"id":79,"name":"terminal"},{"id":29,"name":"testing"},{"id":12,"name":"ui"},{"id":50,"name":"ux"},{"id":88,"name":"video"},{"id":20,"name":"web-app"},{"id":35,"name":"web-server"},{"id":43,"name":"webassembly"},{"id":69,"name":"workflow"},{"id":87,"name":"yaml"}]" returns me the "expected json"