[TensorFlow Learning] Basics 1

                            Part 1: Building a Graph

First example

Let's use TensorFlow to write a very simple program that implements the following graph algorithm:


import tensorflow as tf

# Two input nodes, combined by multiply/add Ops
a = tf.constant(5, name='input_a')
b = tf.constant(3, name='input_b')
c = tf.multiply(a, b, name='mul_c')
d = tf.add(a, b, name='add_d')
e = tf.add(c, d, name='add_e')

# Run the graph in a Session and fetch the value of `e`
sess = tf.Session()
print(sess.run(e))

Output: 23

Note that addition is tf.add. Also, due to API changes between versions, be aware of the following renames (a small compatibility sketch follows the list):

tf.mul---> tf.multiply

tf.sub---> tf.subtract

tf.neg---> tf.negative
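If your code needs to run on TF versions from both sides of the rename, you can select whichever name exists at import time; a minimal compatibility sketch (assuming TF 1.x on the new side):

import tensorflow as tf

# Fall back to the pre-1.0 names when the new ones are missing
multiply = tf.multiply if hasattr(tf, 'multiply') else tf.mul
subtract = tf.subtract if hasattr(tf, 'subtract') else tf.sub
negative = tf.negative if hasattr(tf, 'negative') else tf.neg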

Second example

The basic arithmetic operations in TensorFlow are:

tf.add, tf.subtract, tf.multiply, tf.div, tf.truediv (guarantees floating-point division), tf.negative, tf.mod

Note: tf.div performs either integer division or floating-point division depending on the type of the inputs. If you want to ensure floating-point division, use tf.truediv!
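To see the difference, a quick sketch with two int32 constants (TF 1.x):

import tensorflow as tf

a = tf.constant(5)
b = tf.constant(3)
sess = tf.Session()
print(sess.run(tf.div(a, b)))      # 1 -- integer division, because the inputs are int32
print(sess.run(tf.truediv(a, b)))  # 1.6666... -- inputs are cast to a float type first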

Below we practice the basic operations tf.subtract, tf.mod, and tf.div:

import tensorflow as tf

a = tf.constant(5, name='input_a')
b = tf.constant(3, name='input_b')
c = tf.div(a, b, name='div_c')       # 5 / 3 = 1 (integer division on int32)
d = tf.subtract(a, b, name='sub_d')  # 5 - 3 = 2
e = tf.mod(c, d, name='mod_e')       # 1 mod 2 = 1

sess = tf.Session()
print(sess.run(e))

Output: 1


                             Part 2: Using TensorBoard

First, the code:

import tensorflow as tf

a = tf.constant(5, name='input_a')
b = tf.constant(3, name='input_b')
c = tf.multiply(a, b, name='mul_c')
d = tf.add(a, b, name='add_d')
e = tf.add(c, d, name='add_e')

'''## The commented-out code below also runs successfully:
with tf.Session() as sess:
    print(sess.run(e))
    writer = tf.summary.FileWriter('./my_graph', sess.graph)
writer.close()
'''

sess = tf.Session()
print(sess.run(e))
writer = tf.summary.FileWriter('./my_graph', sess.graph)
writer.close()  ## Be sure to close the writer, otherwise no graph description file is written under my_graph.
sess.close()    ## Closing the Session is optional; you get the TensorBoard content described below either way.

Compared with the code in Part 1, we only added one line at the end. tf.summary.FileWriter creates a tf.summary.FileWriter object, whose constructor takes two arguments. The first is the output path, i.e., where the graph description will be stored on disk; here the generated file is saved under ./my_graph. The second is the graph attribute of our Session, i.e., TensorFlow's default graph. tf.Session objects, as managers of graphs defined in TensorFlow, have a graph attribute that is a reference to the graph they are keeping track of. By passing this to FileWriter, the writer outputs a description of the graph inside the “my_graph” directory.

After the program finishes, a my_graph folder appears under the current directory; open it and you can see the graph description (event) file that FileWriter wrote.


Then enter in a terminal:

 tensorboard --logdir='my_graph'

You should see some log info print to the console, followed by the message “Starting TensorBoard on port 6006”. What you've done is start a TensorBoard server that uses data from the “my_graph” directory. Like any server, TensorBoard needs an address and a port; by default it starts on port 6006. To access it, open a browser and type http://localhost:6006 (this isn't always right; use the address your own terminal actually prints, as in my case below). You'll be greeted with an orange-and-white-themed screen:

Open a browser and enter http://ta:6006 (the address my terminal printed), and you can see the orange-themed page shown below. (Note: don't close the terminal window running TensorBoard, or you won't see the results.)


Don't be alarmed by the “No scalar data was found” warning message (mine says “No dashboards are …”). That just means we didn't save any summary statistics for TensorBoard to display; normally, this screen would show information we asked TensorFlow to save using our FileWriter. Since we didn't write any additional stats, there's nothing to display. That's fine, though, as we're here to admire our beautiful graph. Click on the “Graphs” link at the top of the page, and you should see a screen similar to this:


In the graph you can see the names we gave each node. If you click on the nodes, you can get information about them, such as which other nodes they are attached to. You'll notice that the inputs a and b appear to be duplicated, but if you hover over or click on either of the nodes labeled “input_a”, you should see that they both get highlighted together (see the two red circles in the second figure below; I only clicked the left “input_a”). This graph doesn't look exactly like the graph we drew above, but it is the same graph, since the “input” nodes are simply shown twice. Pretty awesome!


                           Part 3: N-D Tensors

I. Key points:

   1) Building an N-D tensor;

   2) How to use tf.reduce_prod and tf.reduce_sum. Their usage is as follows:

(a) Definition: tf.reduce_prod(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)

This function computes the product of elements across the dimensions of a tensor. input_tensor is reduced along the dimensions given in axis; unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.

Args:

input_tensor: The tensor to reduce. Should have numeric type.

axis: The dimensions to reduce. If None (the default), all dimensions are reduced. Must be in the range [-rank(input_tensor), rank(input_tensor)).

keep_dims: If true, retains reduced dimensions with length 1.

name: A name for the operation (optional).

reduction_indices: The old (deprecated) name for axis.

Returns:

The reduced tensor.

NumPy compatibility: equivalent to np.prod.
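For example, mirroring the tf.reduce_sum example below:

x = tf.constant([[1, 2], [3, 4]])
tf.reduce_prod(x)     # 24
tf.reduce_prod(x, 0)  # [3, 8]
tf.reduce_prod(x, 1)  # [2, 12]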

(b) Definition: tf.reduce_sum(input_tensor, axis=None, keep_dims=False, name=None, reduction_indices=None)

This function computes the sum of elements across the dimensions of a tensor. input_tensor is reduced along the dimensions given in axis; unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1. If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

x = tf.constant([[1, 1, 1], [1, 1, 1]])
tf.reduce_sum(x)                     # 6
tf.reduce_sum(x, 0)                  # [2, 2, 2]
tf.reduce_sum(x, 1)                  # [3, 3]
tf.reduce_sum(x, 1, keep_dims=True)  # [[3], [3]]
tf.reduce_sum(x, [0, 1])             # 6

Args:

input_tensor: The tensor to reduce. Should have numeric type.

axis: The dimensions to reduce. If None (the default), all dimensions are reduced. Must be in the range [-rank(input_tensor), rank(input_tensor)).

keep_dims: If true, retains reduced dimensions with length 1.

name: A name for the operation (optional).

reduction_indices: The old (deprecated) name for axis.

Returns:

The reduced tensor.

NumPy compatibility: equivalent to np.sum.

II. What is a tensor?

        What is a tensor? Tensors are simply the n-dimensional abstraction of matrices. So a 1-D tensor is equivalent to a vector, a 2-D tensor is a matrix, and above that you can just say “N-D tensor”.

       Using the concept of a tensor, we can redraw the first graph below to take a single tensor input (the second graph below). The benefits of doing so are:

1) The client only has to send input to a single node, which simplifies using the graph.

2) The nodes that directly depend on the input now only have to keep track of one dependency instead of two.


III. Building the tensor graph

    Here is our test code:

import tensorflow as tf

a = tf.constant([5, 3], name="input_a")
b = tf.reduce_prod(a, name="prod_b")  # 5 * 3 = 15
c = tf.reduce_sum(a, name="sum_c")    # 5 + 3 = 8
d = tf.add(b, c, name="add_d")        # 15 + 8 = 23

sess = tf.Session()
print(sess.run(d))

    The output is 23 (15 + 8), the same result as the first example.

                           Part 4: TensorFlow Data Types

1. Data types

      TF's data types are listed in the table below.
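For reference, the commonly used types in TF 1.x are:

tf.float32     32-bit floating point
tf.float64     64-bit floating point
tf.int8        8-bit signed integer
tf.int16       16-bit signed integer
tf.int32       32-bit signed integer
tf.int64       64-bit signed integer
tf.uint8       8-bit unsigned integer
tf.string      String (as bytes)
tf.bool        Boolean
tf.complex64   Complex number, with 32-bit floating point real and imaginary parts
tf.qint8       8-bit signed integer used in quantized Ops
tf.qint32      32-bit signed integer used in quantized Ops
tf.quint8      8-bit unsigned integer used in quantized Ops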

       The last few types in the table, tf.qint8, tf.qint32, and tf.quint8 (used by quantized Ops), are rarely seen; as of this writing, I still don't know how to use them.

2. TF data types and NumPy

    TensorFlow's data types are based on NumPy; in fact, the statement np.int32 == tf.int32 returns True! Any NumPy array can be passed into any TensorFlow Op, and the beauty is that you can easily specify the data type you need with minimal effort.

    As a bonus, you can use the functionality of the numpy library both before and after running your graph, as the tensors returned from Session.run are NumPy arrays.
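A quick sketch of both directions (NumPy in, NumPy out):

import numpy as np
import tensorflow as tf

print(np.int32 == tf.int32)  # True

# A NumPy array can be passed straight into a TensorFlow Op...
a = tf.add(np.array([1, 2], dtype=np.int32), 3)

# ...and Session.run returns a NumPy array
sess = tf.Session()
out = sess.run(a)
print(out, type(out))  # [4 5] <class 'numpy.ndarray'>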

      Note the difference between NumPy string types and TF string types: if you want to pass NumPy strings to TF, do not specify a dtype in np.array(). That is, the first line below is correct, and the second is wrong:

1) np.array([b"apple", b"peach", b"grape"])                   correct
2) np.array([b"apple", b"peach", b"grape"], dtype=np.string)  wrong

Example:

import numpy as np  # Don't forget to import NumPy!

# 0-D Tensor with 32-bit integer data type
t_0 = np.array(50, dtype=np.int32)

# 1-D Tensor with byte string data type
# Note: don't explicitly specify dtype when using strings in NumPy
t_1 = np.array([b"apple", b"peach", b"grape"])

# 2-D Tensor with boolean data type
t_2 = np.array([[True, False, False],
                [False, False, True],
                [False, True, False]], dtype=np.bool)

# 3-D Tensor with 64-bit integer data type
t_3 = np.array([[[0, 0], [0, 1], [0, 2]],
                [[1, 0], [1, 1], [1, 2]],
                [[2, 0], [2, 1], [2, 2]]], dtype=np.int64)

注意: Although TensorFlow is designed to understand NumPy data types natively, the converse is not true. Don’t accidentally try to initialize a NumPy array with tf.int32!

3. Tensor shapes

In TensorFlow, you can use either a Python list or a tuple to specify a tensor's shape. The shape, in TensorFlow terminology, describes both the number of dimensions in a tensor and the length of each dimension. Tensor shapes can be either Python lists or tuples containing an ordered set of integers.

# Shapes that specify a 0-D Tensor (scalar)
# e.g. any single number: 7, 1, 3, 4, etc.
s_0_list = []
s_0_tuple = ()

# Shape that describes a vector of length 3
# e.g. [1, 2, 3]
s_1 = [3]

# Shape that describes a 3-by-2 matrix
# e.g. [[1, 2], [3, 4], [5, 6]]
s_2 = (3, 2)

    In addition to being able to specify fixed lengths for each dimension, you are also able to assign a flexible length by passing in None as a dimension's value. Furthermore, passing in the value None as the shape (instead of using a list/tuple that contains None) tells TensorFlow to allow a tensor of any shape: that is, a tensor with any number of dimensions and any length for each dimension:

# Shape for a vector of any length:
s_1_flex = [None]

# Shape for a matrix that is any amount of rows tall, and 3 columns wide:
s_2_flex = (None, 3)

# Shape of a 3-D Tensor with length 2 in its first dimension, and variable-
# length in its second and third dimensions:
s_3_flex = [2, None, None]

# Shape that could be any Tensor
s_any = None

If you ever need to figure out the shape of a tensor in the middle of your graph, you can use the tf.shape Op. It simply takes in the Tensor object you'd like to find the shape of, and returns it as an int32 vector:

import tensorflow as tf

# ...create some sort of mystery tensor

# Find the shape of the mystery tensor
shape = tf.shape(mystery_tensor, name="mystery_shape")

Remember that tf.shape, like any other Operation, doesn’t run until it is executed inside of a Session.
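A runnable sketch, with a stand-in for the mystery tensor:

import tensorflow as tf

mystery_tensor = tf.zeros([2, 3, 4])  # stand-in for "some sort of mystery tensor"
shape = tf.shape(mystery_tensor, name="mystery_shape")

sess = tf.Session()
print(sess.run(shape))  # [2 3 4]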

                 Part 5: TensorFlow Operations

The following code shows how to build a TF Operation:

import tensorflow as tf
import numpy as np

# Initialize some tensors to use in computation
a = np.array([2, 3], dtype=np.int32)
b = np.array([4, 5], dtype=np.int32)

# Use `tf.add()` to initialize an "add" Operation
# The variable `c` will be a handle to the Tensor output of this Op
c = tf.add(a, b)

TF overloads the common mathematical operators (such as +, -, *, /), so if one or more operands is a Tensor object, we can simply write:

# Assume that `a` and `b` are `Tensor` objects with matching shapes
c = a + b

The code above is equivalent to c = tf.add(a, b); because the operator is overloaded, this form is more concise.

The operators overloaded in TF are:
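For reference, in TF 1.x they include:

-x        tf.negative(x)
abs(x)    tf.abs(x)
~x        tf.logical_not(x)
x + y     tf.add(x, y)
x - y     tf.subtract(x, y)
x * y     tf.multiply(x, y)
x / y     tf.div(x, y) (Python 2) or tf.truediv(x, y) (Python 3)
x // y    tf.floordiv(x, y)
x % y     tf.mod(x, y)
x ** y    tf.pow(x, y)
x < y     tf.less(x, y)
x <= y    tf.less_equal(x, y)
x > y     tf.greater(x, y)
x >= y    tf.greater_equal(x, y)
x & y     tf.logical_and(x, y)
x | y     tf.logical_or(x, y)
x ^ y     tf.logical_xor(x, y)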

        Note that for absolute value there is no |x| overload; you must write abs(x), or else use tf.abs(x). Additionally, technically the == operator is overloaded as well, but it will not return a Tensor of boolean values. Instead, it returns True if the two tensors being compared are the same object, and False otherwise. This is mainly used for internal purposes. If you'd like to check for equality or inequality, check out tf.equal() and tf.not_equal(), respectively.

       Although the overloaded operators above let you write code quickly, they do not let you give the operations a name. If you need to name an operation (e.g., so it shows up in TensorBoard), don't use the overloaded operators; use the tf Op functions instead.
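For example (the name "add_a_b" is just for illustration):

# Overloaded operator: works, but the Op cannot be named
c = a + b

# Equivalent Op function: accepts a name, which is what TensorBoard displays
c = tf.add(a, b, name="add_a_b")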

                         Part 6: TensorFlow Graphs

       First, let's see how to create a TF Graph. Creating a Graph is simple: its constructor doesn't take any arguments:

import tensorflow as tf

# Create a new graph:
g = tf.Graph()

        Once we have our Graph initialized, we can add Operations to it by using the Graph.as_default() method to access its context manager. In conjunction with the with statement, we can use the context manager to let TensorFlow know that we want to add Operations to a specific Graph:

with g.as_default():
    # Create Operations as usual; they will be added to graph `g`
    a = tf.multiply(2, 3)
    ...

       You might be wondering why we haven’t needed to specify the graph we’d like to add our Ops to in the previous examples. As a convenience, TensorFlow automatically creates a Graph when the library is loaded and assigns it to be the default. Thus, any Operations, tensors, etc. defined outside of a Graph.as_default() context manager will automatically be placed in the default graph:

# Placed in the default graph
in_default_graph = tf.add(1, 2)

# Placed in graph `g`
with g.as_default():
    in_graph_g = tf.multiply(2, 3)

# We are no longer in the `with` block, so this is placed in the default graph
also_in_default_graph = tf.subtract(5, 1)

If you’d like to get a handle to the default graph, use the tf.get_default_graph() function:

default_graph = tf.get_default_graph()

       In most TensorFlow programs, you will only ever deal with the default graph. However, creating multiple graphs can be useful if you are defining multiple models that do not have interdependencies. When defining multiple graphs in one file, it’s best practice to either not use the default graph or immediately assign a handle to it. This ensures that nodes are added to each graph in a uniform manner.
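A sketch of that practice with two independent graphs:

import tensorflow as tf

g1 = tf.Graph()
g2 = tf.Graph()

with g1.as_default():
    # Ops defined here go into `g1`
    a = tf.constant(5)

with g2.as_default():
    # Ops defined here go into `g2`
    b = tf.constant(10)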

       Additionally, it is possible to load in previously defined models from other TensorFlow scripts and assign them to Graph objects using a combination of the Graph.as_graph_def() and tf.import_graph_def functions. Thus, a user can compute and use the output of several separate models in the same Python file. We will cover importing and exporting graphs later in this book.
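A minimal sketch of that round trip, reusing `g1` from above:

# Serialize the graph into a GraphDef protocol buffer...
graph_def = g1.as_graph_def()

# ...and import it into a fresh graph, under the name scope "imported"
g_new = tf.Graph()
with g_new.as_default():
    tf.import_graph_def(graph_def, name="imported")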

                      Part 7: TensorFlow Sessions

I. Introduction to Sessions

     What does a Session do? It is responsible for graph execution: it is the class for running TensorFlow Operations. A Session object encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated. For example:

# Build a graph.
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b

# Launch the graph in a session.
sess = tf.Session()

# Evaluate the tensor `c`.
print(sess.run(c))

      A Session may own resources, such as tf.Variable, tf.QueueBase, and tf.ReaderBase. It is important to release these resources when they are no longer needed. To do so, either call the tf.Session.close method on the session, or use the session as a context manager. The following two examples are equivalent:

# Using the `close()` method.
sess = tf.Session()
sess.run(...)
sess.close()

# Using the context manager.
with tf.Session() as sess:
    sess.run(...)


II. Creating a Session object

Session methods

__init__(target='', graph=None, config=None)

Creates a new TensorFlow session. If no graph argument is specified when constructing the session, the default graph is launched in the session. If you are using more than one graph (created with tf.Graph() in the same process), you must use a different session for each graph, but each graph can be used in multiple sessions. In that case, it is often clearer to pass the graph to be launched explicitly to the session constructor.

Args:

target: (Optional) The execution engine to connect to. Defaults to using the in-process engine.

graph: (Optional) The Graph to be launched (described above).

config: (Optional) A ConfigProto protocol buffer with configuration options for the session.

The following two ways of creating a Session object are equivalent:

sess = tf.Session()
sess = tf.Session(graph=tf.get_default_graph())

Once the Session object has been created, you can use its run() method to perform computation:

sess.run(b)


run() is defined as follows:

run(fetches, feed_dict=None, options=None, run_metadata=None)

a) fetches

fetches accepts any graph element (either an Operation or Tensor object), which specifies what the user would like to execute. If the requested object is a Tensor, then the output of run() will be a NumPy array. If the object is an Operation, then the output will be None. In the above example, we set fetches to the tensor b (the output of the tf.multiply Operation). This tells TensorFlow that the Session should find all of the nodes necessary to compute the value of b, execute them in order, and output the value of b. We can also pass in a list of graph elements:

sess.run([a, b]) # returns [7, 21]


In addition to using fetches to get Tensor outputs, you'll also see examples where we give fetches a direct handle to an Operation that has a useful side effect when run. An example of this is tf.initialize_all_variables() (renamed tf.global_variables_initializer in later versions), which prepares all TensorFlow Variable objects to be used. We still pass the Op as the fetches parameter, but the result of Session.run() will be None:

# Performs the computations needed to initialize Variables, but returns `None`
sess.run(tf.initialize_all_variables())
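A small sketch of what that looks like with an actual Variable (the names are illustrative):

import tensorflow as tf

v = tf.Variable(3, name="my_variable")
add = tf.add(5, v)

sess = tf.Session()
sess.run(tf.initialize_all_variables())  # returns None; `v` is now usable
print(sess.run(add))  # 8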

b) feed_dict

The parameter feed_dict is used to override Tensor values in the graph, and it expects a Python dictionary object as input. The keys in the dictionary are handles to Tensor objects that should be overridden, while the values can be numbers, strings, lists, or NumPy arrays (as described previously). The values must be of the same type (or convertible to the same type) as the Tensor key. Let's show how we can use feed_dict to overwrite the value of a in the previous graph:

import tensorflow as tf

# Create Operations, Tensors, etc (using the default graph)
a = tf.add(2, 5)
b = tf.multiply(a, 3)

# Start up a `Session` using the default graph
sess = tf.Session()

# Define a dictionary that says to replace the value of `a` with 15
replace_dict = {a: 15}

# Run the session, passing in `replace_dict` as the value to `feed_dict`
sess.run(b, feed_dict=replace_dict)  # returns 45

       Notice that even though a would normally evaluate to 7, the dictionary we passed into feed_dict replaced that value with 15. After you are finished using the Session, call its close() method to release unneeded resources.

       As an alternative, you can also use the Session as a context manager, which will automatically close when the code exits its scope:

with tf.Session() as sess:
    # Run graph, write summary statistics, etc.
    ...
    # The Session closes automatically

          We can also use a Session as a context manager via its as_default() method. Similarly to how Graph objects can be used implicitly by certain Operations, you can set a session to be used automatically by certain functions. The most common such functions are Operation.run() and Tensor.eval(), which act as if you had passed them into Session.run() directly.

# Define simple constant
a = tf.constant(5)

# Open up a Session
sess = tf.Session()

# Use the Session as a default inside of `with` block
with sess.as_default():
    a.eval()

# Have to close Session manually.
sess.close()

Adding Inputs with Placeholder nodes

        Placeholders, as the name implies, act as if they are Tensor objects, but they do not have their values specified when created. Instead, they hold the place for a Tensor that will be fed at runtime, in effect becoming an “input” node. Creating placeholders is done using the tf.placeholder Operation:

import tensorflow as tf
import numpy as np

# Creates a placeholder vector of length 2 with data type int32
a = tf.placeholder(tf.int32, shape=[2], name="my_input")

# Use the placeholder as if it were any other Tensor object
b = tf.reduce_prod(a, name="prod_b")
c = tf.reduce_sum(a, name="sum_c")

# Finish off the graph
d = tf.add(b, c, name="add_d")

        tf.placeholder takes in a required parameter dtype, as well as the optional parameter shape:

1) dtype specifies the data type of values that will be passed into the placeholder. This is required, as it is needed to ensure that there will be no type mismatch errors.

2) shape specifies what shape the fed Tensor will be. See the discussion on Tensor shapes above. The default value of shape is None, which means a Tensor of any shape will be accepted.

      In order to actually give a value to the placeholder, we use the feed_dict parameter in Session.run(). We use the handle to the placeholder's output as the key to the dictionary (in the above code, the variable a), and the Tensor object we want to pass in as its value:

# Open a TensorFlow Session
sess = tf.Session()

# Create a dictionary to pass into `feed_dict`
# Key: `a`, the handle to the placeholder's output Tensor
# Value: A vector with value [5, 3] and int32 data type
input_dict = {a: np.array([5, 3], dtype=np.int32)}

# Fetch the value of `d`, feeding the values of `input_dict` into `a`
sess.run(d, feed_dict=input_dict)  # returns 23

        You cannot fetch the value of placeholders themselves: Session.run() will simply raise an exception if you ask it to evaluate a placeholder without feeding a value for it.
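Concretely, with the graph above (error message paraphrased):

# sess.run(d)  # fails: "You must feed a value for placeholder tensor 'my_input'"
sess.run(d, feed_dict={a: [5, 3]})  # works once a value is fed: returns 23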
