TensorFlow Losses: MSE

Loss functions in TensorFlow. This article shows how loss functions are used in TensorFlow for regression and classification tasks. In order to train a model, TensorFlow loops through the data and, for a linear model, should find the optimal line that fits it; conveniently, the MSE loss surface of a linear model is convex, so gradient descent can find that line reliably. Because the loss is a measure of error, we want to see it go down during training. When defining a model with one of tf.keras' APIs, the loss can be given either as the string identifier of an existing loss function (e.g. "mse" or "categorical_crossentropy") or as a call to a loss function object (e.g. loss_mean_squared_error() in the R interface). For classification, TensorFlow provides functions such as tf.nn.sigmoid_cross_entropy_with_logits, and a regularization term can be added by taking the L2 loss of each weight tensor. A list of available losses and metrics is given in Keras' documentation, and because TensorFlow gives you full control of the computations, you can also write custom loss functions. Pixel-wise MSE is not always the right choice, though: in super-resolution, for example, one influential paper proposes a loss function focused on perceptual similarity instead of pixel-wise MSE.
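As a sketch of the two ways to specify a loss when compiling a Keras model — the layer sizes and the Adam optimizer here are illustrative assumptions, not taken from the text:

```python
import numpy as np
import tensorflow as tf

# A minimal regression model; the layer sizes are invented.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

# The loss can be given as a string identifier...
model.compile(optimizer="adam", loss="mse")

# ...or as a call to a loss object, which allows extra configuration.
model.compile(optimizer="adam", loss=tf.keras.losses.MeanSquaredError())

# Calling the model on a dummy batch confirms the output shape.
out = model(np.zeros((2, 4), dtype="float32"))
```

Either spelling trains identically; the object form is useful when you need to set options such as the reduction mode.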
We will train with the Adam optimizer, and define metrics to measure the performance of the network during its training. TensorFlow's tf.metrics module provides many of these, such as precision, recall, accuracy, AUC, and MSE, as well as their streaming versions, and callbacks give you a view on internal states and statistics of the model while it trains. The task determines the loss: linear regression predicts a value, while a linear classifier predicts a class. Usually, MSE is taken for regression and cross-entropy for classification — cross-entropy loss increases as the predicted probability diverges from the actual label. When deciding which regression loss to use, consider the outliers: if they represent anomalies that are important for the business and should be detected, use MSE, since it penalizes large errors heavily. Later in this series we will train an autoencoder with an MSE reconstruction loss to detect credit card fraud. (Two asides: the biggest difference between TensorFlow 1.x and PyTorch is static vs. dynamic computation graphs — with dynamic graphs, creating a static graph beforehand is unnecessary, since reverse-mode auto-differentiation implies a computation graph anyway. And "deep learning" is largely a synonym for deep neural networks, but it emphasizes depth and non-linearity, because purely linear network models have limited expressive power.)
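A minimal NumPy comparison of how MSE and MAE react to a single outlier; the numbers are made up for illustration:

```python
import numpy as np

# Four targets predicted perfectly except for one large error of 8.
y_true = np.array([1.0, 1.0, 1.0, 1.0])
y_pred = np.array([1.0, 1.0, 1.0, 9.0])

mse = np.mean((y_true - y_pred) ** 2)   # (0+0+0+64)/4 = 16.0
mae = np.mean(np.abs(y_true - y_pred))  # (0+0+0+8)/4  = 2.0
```

The squaring makes MSE eight times larger than MAE here, which is exactly why MSE is the better choice when outliers are anomalies you want the model to care about.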
When training a model we want to see the loss go down. A typical preamble for the code in this post is: from __future__ import absolute_import, division, print_function; import tensorflow as tf; from tensorflow import keras; import numpy as np; print(tf.__version__). Before passing data to an estimator, it is useful to re-scale the dataset to zero mean and unit standard deviation. Remember what the loss measures: the higher the loss function, the dumber the model is. Keras supports multiple loss functions and multiple outputs in a single model, and primitives such as tf.math.squared_difference make it easy to assemble MSE by hand. MSE also powers unsupervised models: an autoencoder is a great tool to recreate an input, and a multi-layer perceptron can likewise be used for regression problems with an MSE loss.
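A hand-rolled MSE built from tf.math.squared_difference might look like this; the sample tensors are invented:

```python
import tensorflow as tf

# MSE assembled by hand; it matches tf.keras.losses.MeanSquaredError
# up to floating point.
def mse_loss(y_true, y_pred):
    return tf.reduce_mean(tf.math.squared_difference(y_true, y_pred))

y_true = tf.constant([1.0, 2.0, 3.0])
y_pred = tf.constant([1.5, 2.0, 2.0])
loss = mse_loss(y_true, y_pred)  # mean of [0.25, 0.0, 1.0]
```

Writing it out like this is also the usual starting point for custom variants (weighted MSE, masked MSE, and so on).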
A regularization term is generally added to the loss to avoid overfitting — for example, an L2 penalty on weights['h1'] and weights['h2'] summed into the training loss; there are several equivalent ways to write this. To train a linear model we optimize both w and b by taking the partial derivatives of the loss function (MSE) with respect to each. Intuitively, MSE measures the squared straight-line (Euclidean) distance between two points; in a neural network, back-propagation iteratively minimizes the MSE so the network learns from your data, and the next time it sees similar data, the inference result should be close to the training output. TensorFlow makes a number of standard loss functions available in tf.losses, and a regression estimator's loss returns the MSE by default; other tasks use other losses — semantic segmentation, for instance, is often trained with a weighted softmax cross-entropy. Keras, the high-level API used in many examples here, does not handle low-level operations such as tensor products and convolutions itself; it delegates them to a backend library. TensorFlow's architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code.
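One possible sketch of adding the L2 penalty to a base loss. The weight shapes and the 0.01 regularization factor are assumptions for illustration, not values from the original text:

```python
import tensorflow as tf

# Hypothetical two-layer weight matrices.
weights = {
    "h1": tf.Variable(tf.ones((4, 8))),
    "h2": tf.Variable(tf.ones((8, 1))),
}

def regularized_loss(base_loss, beta=0.01):
    # tf.nn.l2_loss(w) computes sum(w ** 2) / 2.
    l2 = tf.nn.l2_loss(weights["h1"]) + tf.nn.l2_loss(weights["h2"])
    return base_loss + beta * l2

total = regularized_loss(tf.constant(1.0))  # 1.0 + 0.01 * (16 + 4)
```

The equivalent spellings people use — tf.reduce_sum(tf.square(w)) with an explicit 0.5, or kernel_regularizer=tf.keras.regularizers.l2(...) on the layer — all produce the same penalty up to the scaling constant.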
In a later post we will examine making time series predictions using the sunspots dataset that ships with base R; here we stay with losses. In machine learning it makes sense to plot your loss (or accuracy) for both the training and validation sets over time — when a reader asks why a model's loss will not improve past some value, comparing the two curves is the first diagnostic. A few practical notes: the loss_collection argument of the tf.losses functions is ignored when executing eagerly; keras.utils.clone_metrics(metrics) clones a given metric list or dict; and in MATLAB's fitrlinear, when FittedLoss is 'mse', the loss term in the objective function is half of the MSE, so there is a discrepancy between the MSE and the optimization results it returns. Loss choice matters in practice: one reader trained an autoencoder with cross-entropy and the output was always a blob, with the weights from X to e1 converging to a zero-valued matrix; using mean squared error as the loss function instead produced a decent result, and the inputs could be reconstructed. As the reader notes, the input values are, of course, continuous real values too — a hint that MSE, not cross-entropy, was the appropriate loss.
Editor's note: this is the fourth installment in our blog series about deep learning. Next up is LSTM regression using TensorFlow, but first a few properties of the losses themselves. MSE is symmetric in the sign of the error — it treats −5 and 5 alike, since (−5)² = 5² = 25. In the exercises you will compute the loss using data from the King County housing dataset, a practical way to learn how to make predictions with TensorFlow. The callbacks we created earlier display the loss (on the most recent batch) and the MSE (on the whole dataset) at the end of each epoch. In a regression problem we aim to predict the output of a continuous value, like a price or a probability; in classification we could have used mse, but categorical_crossentropy fits better — the tf.keras.losses.CategoricalCrossentropy() function expects the target P values to be one-hot encoded. One caveat when using MSE on images: it tends to generate overly smooth textures in the output.
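A small illustration of categorical cross-entropy with one-hot targets; the probabilities are made up:

```python
import tensorflow as tf

# With one-hot targets, the loss reduces to -log of the probability
# the model assigns to the true class.
cce = tf.keras.losses.CategoricalCrossentropy()
y_true = tf.constant([[0.0, 1.0, 0.0]])  # one-hot: class 1
y_pred = tf.constant([[0.1, 0.8, 0.1]])  # predicted probabilities

loss = float(cce(y_true, y_pred))  # -log(0.8), about 0.223
```

A confident correct prediction (0.8 on the true class) gives a small loss; had the model put 0.1 there instead, the loss would jump to -log(0.1) ≈ 2.3.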
For (2), defining the optimizer is about as easy and simple as it gets in TensorFlow, whereas Theano gives you a great deal of control over the optimizers, although it is quite lengthy and increases verification effort. Whatever the framework, the gradient descent update rule is the same: θ ← θ − α · ∂loss/∂θ, where θ is a parameter, α is the learning rate (or step size), and loss is our loss function. Choosing a loss does not fix everything — one reader changed the MSE loss to BinaryCrossentropy and the result was still the same, suggesting the problem lay elsewhere (most answers on Stack Overflow point to the learning rate). In TensorFlow 1.x, losses can also be registered to graph collections with tf.add_to_collection. Finally, note that MSE is a pointwise loss; ranking problems instead use pairwise losses, which compare pairs of predictions rather than individual values.
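The update rule can be sketched in plain NumPy for a one-parameter model; the data and the 0.1 learning rate are invented:

```python
import numpy as np

# Gradient descent on MSE for the model y = w * x.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])  # generated with w = 2

w, alpha = 0.0, 0.1
for _ in range(100):
    grad = np.mean(2.0 * (w * x - y) * x)  # d(MSE)/dw
    w -= alpha * grad                      # theta <- theta - alpha * grad

# w converges toward 2.0
```

Because MSE is convex in w, every step moves toward the single minimum; with a much larger α the iterates would instead oscillate and diverge.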
Later I tried computing the BinaryCrossentropy loss explicitly in my forward pass and attaching it with model.add_loss — a useful pattern whenever the built-in losses do not fit, and recent releases even allow non-Tensors through the v2 losses. Keep the compile arguments straight: the optimizer is how the model is updated based on the data it sees and its loss function, and a custom loss is often just a tf.reduce_mean over an elementwise error. In one side-by-side experiment comparing sigmoid cross-entropy against a least-squares loss, all other settings — architecture, update method, mini-batch normalization, regularization, and learning rates — were held fixed so that only the loss varied. The same machinery carries over to other domains: CNNs can be adapted to voice recognition and other fields, and the same losses drive reinforcement learning, for example a Double Q-network running on a modified version of the CartPole environment.
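A minimal example of sigmoid cross-entropy computed directly from logits; the logits and labels are invented:

```python
import tensorflow as tf

# Binary labels with raw (pre-sigmoid) scores. Working from logits is
# numerically safer than applying sigmoid and then a log-loss.
logits = tf.constant([0.0, 2.0])
labels = tf.constant([0.0, 1.0])

loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=labels,
                                               logits=logits)
# loss[0] = log(1 + e^0)  ≈ 0.693  (undecided score, label 0)
# loss[1] = log(1 + e^-2) ≈ 0.127  (confident score, label 1)
```

The per-element results can be averaged with tf.reduce_mean to get a scalar training loss.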
When training a model in TensorFlow, the loss function is what tells TensorFlow whether the predictions are good or bad compared with the target: in most cases we provide sample data and target data, and the loss compares the predicted values against the given targets. A subtle point if by a "good loss function" you mean MSE: the optimal prediction under MSE is the expected mean under the true distribution, which causes a loss of high-frequency details — another reason MSE produces overly smooth images. If no built-in loss fits and you need a new kernel, take a look at the "Adding a New Op" guide; this requires having downloaded the TensorFlow source and being able to build it. In TensorFlow 1.x code you will also see placeholders created with shape=(None,), which allows feeding the values element by element. Beyond deterministic losses, TensorFlow Probability makes probabilistic reasoning and statistical analysis easier; the TensorFlow package in R does not yet support an API to TensorFlow Probability, but Python code can be run through the reticulate package, which connects R and Python. MSE-style reconstruction objectives also appear in sequence models — a sequence autoencoder is conceptually more complex but still quick to implement in TensorFlow — and in tasks such as music source separation, where voice is separated from music such as pop songs.
We are going to create an autoencoder with a 3-layer encoder and a 3-layer decoder. In a simple word, the machine takes, let's say, an image, and can produce a closely related picture; the reconstruction error is what we minimize. Stepping back, loss functions come in families, and regression loss functions build on linear regression — a fundamental technique in statistics and machine learning that is frequently used to measure the relationship between variables. If the magnitude of the loss function is high, the algorithm is showing a lot of variance in its results and needs to be corrected. When compiling a model, metrics are passed as a list, and in addition to the built-in metrics you may use any of the loss functions as metrics. A related definition worth knowing: an embedding space is the d-dimensional vector space that features from a higher-dimensional vector space are mapped to. On tooling, TFLearn is a modular and transparent deep learning library built on top of TensorFlow, designed to be fast and easy to use, while Keras — developed by François Chollet, a Google engineer — was developed with a focus on enabling fast experimentation.
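A sketch of such an autoencoder in Keras; the input size and the specific layer widths are assumptions, not taken from the original text:

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(32,))
# 3-layer encoder
h = tf.keras.layers.Dense(16, activation="relu")(inputs)
h = tf.keras.layers.Dense(8, activation="relu")(h)
code = tf.keras.layers.Dense(4, activation="relu")(h)
# 3-layer decoder, mirroring the encoder
h = tf.keras.layers.Dense(8, activation="relu")(code)
h = tf.keras.layers.Dense(16, activation="relu")(h)
outputs = tf.keras.layers.Dense(32, activation="sigmoid")(h)

autoencoder = tf.keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")  # reconstruction loss

recon = autoencoder(np.zeros((1, 32), dtype="float32"))
```

Fitting is then autoencoder.fit(x, x, ...): the input serves as its own target, and MSE measures how faithfully the 4-dimensional code can reproduce it.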
By convention, a returned loss is a float (or an ndarray of floats): a non-negative value where the best value is 0.0. In TensorFlow 1.x the training loop pairs such a loss with tf.train.AdamOptimizer(), extracting the trainable variables, computing their gradients, and applying the optimizer to the resulting variables/gradients tuples; tf.losses.mean_squared_error adds a sum-of-squares loss to the training procedure. The TensorFlow loss functions built for RNNs address the cases where we directly want to train on labels or word embeddings, so for other sequence setups you may need to compute the loss yourself — and MSE is simple enough that you can always fall back to a few lines of NumPy. Networks trained this way act, in effect, as universal function approximators.
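A self-contained NumPy MSE helper, completing the fragment quoted in the sources:

```python
import numpy as np

def MSE(y, y_hat):
    # Mean of squared errors over all examples.
    num_ex = len(y)
    mse_loss = np.sum((np.asarray(y) - np.asarray(y_hat)) ** 2) / num_ex
    return mse_loss

result = MSE([1.0, 2.0], [2.0, 2.0])  # (1 + 0) / 2 = 0.5
```

Handy for sanity-checking what a framework's loss reports, since any discrepancy (such as fitrlinear's factor of one half) shows up immediately.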
For image restoration tasks such as dehazing, the loss is typically tf.reduce_mean(tf.square(dehazed_X - Y)). For classification, softmax is a type of activation layer that normalizes outputs so we can interpret them as probabilities, while cross-entropy loss is what we apply to those probabilities. On the regression side, the standard loss functions are MAE, MAPE, MSE, RMSE, and RMSLE. A good sanity check is to compare the network's MSE loss against a trivial baseline, such as the MSE obtained by standard averaging — beating the baseline is a good sign that the model is learning something useful. Two practical questions often follow: what changes are needed to run the same compiled model on two GPUs, perhaps with a batch size of 128 instead of 64, and how far a minimal framework gets you — in the toy implementation referenced here, only an MSE loss and an SGD optimizer have been developed so far. (For Bayesian alternatives, Edward, which runs on top of TensorFlow, implements a few types of MCMC samplers, but not the NUTS sampler.)
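Plain-NumPy sketches of the regression losses listed above; the sample values are invented:

```python
import numpy as np

def mae(y, p):   return np.mean(np.abs(y - p))
def mse(y, p):   return np.mean((y - p) ** 2)
def rmse(y, p):  return np.sqrt(mse(y, p))
def mape(y, p):  return 100.0 * np.mean(np.abs((y - p) / y))
def rmsle(y, p): return np.sqrt(np.mean((np.log1p(p) - np.log1p(y)) ** 2))

y = np.array([1.0, 2.0, 4.0])
p = np.array([2.0, 2.0, 4.0])
# mae = 1/3, mse = 1/3, rmse ≈ 0.577, mape ≈ 33.3%
```

Note the different failure modes: MAPE divides by y and so breaks on zero targets, while RMSLE only cares about relative (log-scale) error and requires non-negative values.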
From the TensorBoard graph, you can see that the GradientDescentOptimizer node depends on the loss and on the variables it updates. During fit, Keras also runs the loss function on the validation data, giving you a validation loss alongside the training loss each epoch. And although we use it here for machine learning, TensorFlow is more than that: it is a general purpose computing library. As a worked example of a many-to-one RNN with a fixed sequence length, consider an LSTM regressor: the network has a visible layer with 1 input, a hidden layer with 4 LSTM blocks (neurons), and an output layer that makes a single value prediction; it is trained for 100 epochs with a batch size of 1. For multi-output regression — say an output like [x y z xx yy zz] — minimizing the MSE of each term is exactly what a single MSE loss over the whole output vector does.
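That LSTM regressor can be sketched in Keras as follows; the look_back window length of 1 is an assumption:

```python
import numpy as np
import tensorflow as tf

look_back = 1  # number of past time steps fed to the network
model = tf.keras.Sequential([
    tf.keras.Input(shape=(look_back, 1)),  # (time steps, 1 feature)
    tf.keras.layers.LSTM(4),               # hidden layer of 4 LSTM units
    tf.keras.layers.Dense(1),              # single-value prediction
])
model.compile(loss="mean_squared_error", optimizer="adam")

# Training, as described in the text, would then be:
# model.fit(train_X, train_y, epochs=100, batch_size=1)
pred = model(np.zeros((2, look_back, 1), dtype="float32"))
```

The input must be shaped (samples, time_steps, features), which is what the windowing step discussed later produces.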
A callback is a set of functions to be applied at given stages of the training procedure; the ones above display the loss at intervals during training. Loss choice interacts with the output layer: in one forum thread, a snippet's loss was changed from "binary_crossentropy" to "mse" (mean squared error), which appears to be the cause of the issue under discussion — and with sigmoid outputs, thresholding at 0.5 will in turn mean the decoded images have very hard, pixellized edges. For segmentation there are overlap-based alternatives: the soft Dice (Sørensen) or Jaccard coefficient compares the similarity of two batches of data and is usually used for binary image segmentation. Embeddings give another natural home for MSE: applying them to a common collaborative filtering task — predicting user ratings by minimizing squared error — is a good way to understand what an embedding layer really does. Now that we have the model, the loss function, and the optimizer function defined, we can train the model to learn the parameters w and b.
To recap, the loss function indicates the difference between the actual value and the predicted value, and MSE can be expressed in TensorFlow in two equivalent forms. For time series, windowing the data with a time_steps parameter creates examples that allow the model to look time_steps steps back into the past in order to make each prediction — this is how, later on, we will make some predictions in the Forex market, the market's Far West. For ordinal targets, quadratic weighted kappa is a useful evaluation metric, and computational methods like cross-validation are particularly valuable when data is limited. The perceptual-loss ideas mentioned earlier come from photo-realistic single image super-resolution using a generative adversarial network (Ledig et al.). The most important thing is to remember to perform your actions on tensors — and with the model saved, we are ready to move on.
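The two equivalent forms of MSE, side by side; the sample tensors are invented:

```python
import tensorflow as tf

y = tf.constant([1.0, 2.0, 3.0])
y_hat = tf.constant([2.0, 2.0, 2.0])

mse_a = tf.reduce_mean(tf.square(y - y_hat))          # written by hand
mse_b = tf.keras.losses.MeanSquaredError()(y, y_hat)  # Keras loss object
# both equal (1 + 0 + 1) / 3 = 2/3
```

The hand-written form is convenient inside a GradientTape loop; the loss object is what you pass to model.compile.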
The TensorFlow API uses data flow graphs for scalable machine learning.