Kaggle test: cats-vs-dogs

The dataset comes from a 2013 Kaggle competition and contains 25,000 images of cats and dogs (12,500 of each). The task is binary cat-vs-dog classification.

1. Build the dataset

Three subsets are created: a training set with 1,000 samples per class, a validation set with 500 per class, and a test set with 500 per class.

In [10]:
import os,shutil

original_dataset_dir = 'C:/Users/Xiaoqi/Desktop/deeplearning/dogs-vs-cats/train'

base_dir =  'C:/Users/Xiaoqi/Desktop/deeplearning/dogs-vs-cats-small/'
os.mkdir(base_dir)

### create the train, validation, and test directories for cats and dogs
train_dir = os.path.join(base_dir,'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir,'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir,'test')
os.mkdir(test_dir)

train_cats_dir = os.path.join(train_dir,'cats')
os.mkdir(train_cats_dir)
train_dogs_dir = os.path.join(train_dir,'dogs')
os.mkdir(train_dogs_dir)


validation_cats_dir = os.path.join(validation_dir,'cats')
os.mkdir(validation_cats_dir)
validation_dogs_dir = os.path.join(validation_dir,'dogs')
os.mkdir(validation_dogs_dir)

test_cats_dir = os.path.join(test_dir,'cats')
os.mkdir(test_cats_dir)
test_dogs_dir = os.path.join(test_dir,'dogs')
os.mkdir(test_dogs_dir)
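A note on re-running this cell: `os.mkdir` raises `FileExistsError` if a directory already exists. A re-runnable alternative (`make_split_dirs` is a hypothetical helper, not part of the original notebook) uses `os.makedirs` with `exist_ok=True`:

```python
import os

def make_split_dirs(base_dir):
    """Create train/validation/test subdirectories for each class.

    Unlike os.mkdir, os.makedirs with exist_ok=True creates any missing
    parent directories and does not raise if a directory already exists,
    so the cell can be re-run safely.
    """
    paths = {}
    for split in ('train', 'validation', 'test'):
        for label in ('cats', 'dogs'):
            path = os.path.join(base_dir, split, label)
            os.makedirs(path, exist_ok=True)
            paths[(split, label)] = path
    return paths
```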
In [11]:
## copy the images into the corresponding subsets
# cats
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir,fname)
    dst = os.path.join(train_cats_dir,fname)
    shutil.copyfile(src,dst)
fnames = ['cat.{}.jpg'.format(i) for i in range(1000,1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir,fname)
    dst = os.path.join(validation_cats_dir,fname)
    shutil.copyfile(src,dst)
fnames = ['cat.{}.jpg'.format(i) for i in range(1500,2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir,fname)
    dst = os.path.join(test_cats_dir,fname)
    shutil.copyfile(src,dst)

# dogs
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir,fname)
    dst = os.path.join(train_dogs_dir,fname)
    shutil.copyfile(src,dst)

fnames = ['dog.{}.jpg'.format(i) for i in range(1000,1500)]
for fname in fnames:
    src = os.path.join(original_dataset_dir,fname)
    dst = os.path.join(validation_dogs_dir,fname)
    shutil.copyfile(src,dst)
fnames = ['dog.{}.jpg'.format(i) for i in range(1500,2000)]
for fname in fnames:
    src = os.path.join(original_dataset_dir,fname)
    dst = os.path.join(test_dogs_dir,fname)
    shutil.copyfile(src,dst)
In [15]:
print('Total number of training cat images:',len(os.listdir(train_cats_dir)))
print('Total number of validation cat images:',len(os.listdir(validation_cats_dir)))
print('Total number of test cat images:',len(os.listdir(test_cats_dir)))

print('Total number of training dog images:',len(os.listdir(train_dogs_dir)))
print('Total number of validation dog images:',len(os.listdir(validation_dogs_dir)))
print('Total number of test dog images:',len(os.listdir(test_dogs_dir)))
Total number of training cat images: 1000
Total number of validation cat images: 500
Total number of test cat images: 500
Total number of training dog images: 1000
Total number of validation dog images: 500
Total number of test dog images: 500

2. Build a small convnet from scratch

Conv2D (relu) + MaxPooling2D

In [16]:
from keras import layers
from keras import models

model = models.Sequential()
model.add(layers.Conv2D(32,(3,3),activation = 'relu',input_shape = (150,150,3))) ## inputs are 150x150 images with 3 color channels
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Conv2D(64,(3,3),activation = 'relu')) 
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Conv2D(128,(3,3),activation = 'relu')) 
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Conv2D(128,(3,3),activation = 'relu')) 
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Flatten())
model.add(layers.Dense(512,activation = 'relu'))
model.add(layers.Dense(1,activation = 'sigmoid')) ## sigmoid output for binary classification
Using TensorFlow backend.
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

In [17]:
model.summary()
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 148, 148, 32)      896       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 74, 74, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 72, 72, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 36, 36, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 34, 34, 128)       73856     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 17, 17, 128)       0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 15, 15, 128)       147584    
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 6272)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               3211776   
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 513       
=================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0
_________________________________________________________________
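The output shapes in this summary follow from simple arithmetic: each 3x3 convolution with 'valid' (unpadded) borders shrinks a side by 2, and each 2x2 max pooling halves it (integer division). A quick check in plain Python:

```python
def conv_out(size, kernel=3):
    """Output side length of a 'valid' (unpadded) convolution."""
    return size - kernel + 1

def pool_out(size, pool=2):
    """Output side length of max pooling (floor division)."""
    return size // pool

size = 150
for _ in range(4):          # the four Conv2D + MaxPooling2D stages
    size = pool_out(conv_out(size))
print(size)  # 7, matching the (None, 7, 7, 128) row in the summary
```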
In [18]:
from keras import optimizers
model.compile(loss = 'binary_crossentropy',
             optimizer=optimizers.RMSprop(lr = 1e-4),
             metrics = ['acc'])
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_impl.py:180: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where

3. Data preprocessing

  1. Read the .jpg image files
  2. Decode the JPEG content into RGB grids of pixels
  3. Convert the pixels into floating-point tensors
  4. Rescale the values from [0, 255] to the [0, 1] interval
In [19]:
from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(train_dir,
                                                   target_size=(150,150),
                                                   batch_size=20,
                                                   class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir,
                                                   target_size=(150,150),
                                                   batch_size=20,
                                                   class_mode='binary')
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
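The rescaling in steps 3 and 4 above can be reproduced with NumPy alone; the random uint8 array below is only a stand-in for a decoded JPEG:

```python
import numpy as np

# Stand-in for one decoded 150x150 RGB JPEG: uint8 pixels in [0, 255].
rng = np.random.default_rng(0)
image_uint8 = rng.integers(0, 256, size=(150, 150, 3), dtype=np.uint8)

# Steps 3 and 4: float tensor, then rescale [0, 255] -> [0, 1].
# ImageDataGenerator(rescale=1./255) applies the same scaling per batch.
image_float = image_uint8.astype('float32') / 255.0
print(image_float.shape)
```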
In [21]:
for data_batch, labels_batch in train_generator:
    print('data batch shape: ',data_batch.shape)
    print('label batch shape: ',labels_batch.shape)
    break
    
data batch shape:  (20, 150, 150, 3)
label batch shape:  (20,)
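Because the generator yields batches endlessly, `fit_generator` must be told how many batches make up one epoch. With 2,000 training and 1,000 validation images at batch size 20, the arithmetic behind `steps_per_epoch=100` and `validation_steps=50` is:

```python
import math

train_samples, val_samples, batch_size = 2000, 1000, 20
steps_per_epoch = math.ceil(train_samples / batch_size)
validation_steps = math.ceil(val_samples / batch_size)
print(steps_per_epoch, validation_steps)  # 100 50
```

Note that the later augmented run keeps `steps_per_epoch=100` while raising `batch_size` to 32, so each epoch cycles through the training set more than once; the generator simply wraps around, and augmentation makes the repeats non-identical.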
In [22]:
history = model.fit_generator(train_generator,
                             steps_per_epoch=100,
                             epochs=30,
                             validation_data=validation_generator,
                             validation_steps=50)
WARNING:tensorflow:From D:\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Epoch 1/30
100/100 [==============================] - 13s 126ms/step - loss: 0.6898 - acc: 0.5390 - val_loss: 0.7104 - val_acc: 0.5980
Epoch 2/30
100/100 [==============================] - 5s 52ms/step - loss: 0.6591 - acc: 0.6055 - val_loss: 0.7208 - val_acc: 0.6440
Epoch 3/30
100/100 [==============================] - 5s 52ms/step - loss: 0.6165 - acc: 0.6685 - val_loss: 0.5564 - val_acc: 0.6630
Epoch 4/30
100/100 [==============================] - 5s 52ms/step - loss: 0.5758 - acc: 0.7080 - val_loss: 0.6730 - val_acc: 0.6810
Epoch 5/30
100/100 [==============================] - 5s 52ms/step - loss: 0.5408 - acc: 0.7190 - val_loss: 0.5096 - val_acc: 0.6840
Epoch 6/30
100/100 [==============================] - 5s 52ms/step - loss: 0.5114 - acc: 0.7420 - val_loss: 0.5972 - val_acc: 0.6700
Epoch 7/30
100/100 [==============================] - 5s 52ms/step - loss: 0.4819 - acc: 0.7695 - val_loss: 0.7547 - val_acc: 0.6830
Epoch 8/30
100/100 [==============================] - 5s 52ms/step - loss: 0.4508 - acc: 0.7810 - val_loss: 0.7905 - val_acc: 0.6790
Epoch 9/30
100/100 [==============================] - 5s 52ms/step - loss: 0.4263 - acc: 0.8125 - val_loss: 0.5803 - val_acc: 0.7040
Epoch 10/30
100/100 [==============================] - 5s 52ms/step - loss: 0.4124 - acc: 0.8185 - val_loss: 0.9149 - val_acc: 0.7180
Epoch 11/30
100/100 [==============================] - 5s 53ms/step - loss: 0.3809 - acc: 0.8270 - val_loss: 0.6095 - val_acc: 0.7200
Epoch 12/30
100/100 [==============================] - 5s 52ms/step - loss: 0.3608 - acc: 0.8425 - val_loss: 0.4064 - val_acc: 0.7280
Epoch 13/30
100/100 [==============================] - 5s 52ms/step - loss: 0.3372 - acc: 0.8465 - val_loss: 0.2668 - val_acc: 0.7350
Epoch 14/30
100/100 [==============================] - 5s 52ms/step - loss: 0.3113 - acc: 0.8655 - val_loss: 0.6697 - val_acc: 0.7380
Epoch 15/30
100/100 [==============================] - 5s 52ms/step - loss: 0.2931 - acc: 0.8770 - val_loss: 0.4960 - val_acc: 0.7290
Epoch 16/30
100/100 [==============================] - 5s 51ms/step - loss: 0.2700 - acc: 0.8875 - val_loss: 0.4497 - val_acc: 0.7390
Epoch 17/30
100/100 [==============================] - 5s 52ms/step - loss: 0.2486 - acc: 0.9060 - val_loss: 0.2707 - val_acc: 0.7480
Epoch 18/30
100/100 [==============================] - 5s 52ms/step - loss: 0.2259 - acc: 0.9205 - val_loss: 0.3100 - val_acc: 0.7400
Epoch 19/30
100/100 [==============================] - 5s 51ms/step - loss: 0.1998 - acc: 0.9175 - val_loss: 0.4262 - val_acc: 0.7460
Epoch 20/30
100/100 [==============================] - 5s 52ms/step - loss: 0.1901 - acc: 0.9250 - val_loss: 0.5093 - val_acc: 0.7370
Epoch 21/30
100/100 [==============================] - 5s 52ms/step - loss: 0.1737 - acc: 0.9385 - val_loss: 0.5238 - val_acc: 0.7480
Epoch 22/30
100/100 [==============================] - 5s 51ms/step - loss: 0.1458 - acc: 0.9525 - val_loss: 1.0997 - val_acc: 0.7280
Epoch 23/30
100/100 [==============================] - 5s 52ms/step - loss: 0.1290 - acc: 0.9585 - val_loss: 0.7290 - val_acc: 0.7170
Epoch 24/30
100/100 [==============================] - 5s 52ms/step - loss: 0.1139 - acc: 0.9625 - val_loss: 0.8323 - val_acc: 0.7390
Epoch 25/30
100/100 [==============================] - 5s 51ms/step - loss: 0.1031 - acc: 0.9665 - val_loss: 0.8280 - val_acc: 0.7270
Epoch 26/30
100/100 [==============================] - 5s 52ms/step - loss: 0.0838 - acc: 0.9795 - val_loss: 0.5233 - val_acc: 0.7440
Epoch 27/30
100/100 [==============================] - 5s 52ms/step - loss: 0.0748 - acc: 0.9770 - val_loss: 1.2414 - val_acc: 0.7100
Epoch 28/30
100/100 [==============================] - 5s 51ms/step - loss: 0.0686 - acc: 0.9745 - val_loss: 0.9698 - val_acc: 0.7340
Epoch 29/30
100/100 [==============================] - 5s 52ms/step - loss: 0.0527 - acc: 0.9890 - val_loss: 1.2406 - val_acc: 0.7360
Epoch 30/30
100/100 [==============================] - 5s 52ms/step - loss: 0.0450 - acc: 0.9900 - val_loss: 0.2522 - val_acc: 0.7480
In [23]:
model.save('cats_and_dogs_small_1.h5')
In [26]:
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1,len(acc)+1)

plt.plot(epochs,acc,'bo',label = "Training acc")
plt.plot(epochs,val_acc,'b',label = "Validation acc")
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs,loss,'bo',label = "Training loss")
plt.plot(epochs,val_loss,'b',label = "Validation loss")
plt.title('Training and validation loss')
plt.legend()

plt.show()

The model overfits. Possible remedies: 1. dropout, 2. L2 regularization, 3. data augmentation.
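The validation curves above are noisy, which makes the trend hard to read. One common trick is to plot an exponentially smoothed copy alongside the raw points; `smooth_curve` below is a hypothetical helper, not part of the notebook:

```python
def smooth_curve(points, factor=0.8):
    """Exponential moving average: blend each point with the running
    smoothed value to damp epoch-to-epoch noise in the curves."""
    smoothed = []
    for point in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + point * (1 - factor))
        else:
            smoothed.append(point)
    return smoothed

# e.g. plt.plot(epochs, smooth_curve(val_acc), 'b', label='Smoothed val acc')
```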

4. Data augmentation and a dropout layer

In [29]:
datagen = ImageDataGenerator(rotation_range=40, ## range (in degrees) for random rotations
                            width_shift_range=0.2, ## fraction of width for random horizontal shifts
                            height_shift_range=0.2, ## fraction of height for random vertical shifts
                            shear_range=0.2, # random shearing transformations
                            zoom_range=0.2, # random zoom range
                            horizontal_flip=True, # randomly flip half of the images horizontally
                            fill_mode='nearest') # strategy for filling newly created pixels
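To build intuition for what these parameters do, the two transforms below re-implement `horizontal_flip` and `width_shift_range` (with `fill_mode='nearest'`) in plain NumPy. They are illustrative sketches, not the actual `ImageDataGenerator` internals:

```python
import numpy as np

def random_horizontal_flip(image, rng):
    """Mirror the image left-right with probability 0.5 --
    what horizontal_flip=True does per sample."""
    return image[:, ::-1, :] if rng.random() < 0.5 else image

def random_shift(image, width_shift_range, rng):
    """Shift horizontally by up to width_shift_range * width pixels,
    repeating the edge column to fill the gap (fill_mode='nearest')."""
    h, w, _ = image.shape
    shift = int(rng.uniform(-width_shift_range, width_shift_range) * w)
    shifted = np.roll(image, shift, axis=1)
    if shift > 0:
        shifted[:, :shift] = image[:, :1]    # repeat the left edge column
    elif shift < 0:
        shifted[:, shift:] = image[:, -1:]   # repeat the right edge column
    return shifted

# Usage with a real RNG:
rng = np.random.default_rng(42)
augmented = random_shift(random_horizontal_flip(
    np.zeros((150, 150, 3)), rng), 0.2, rng)
print(augmented.shape)
```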

Add a dropout layer

In [30]:
model = models.Sequential()
model.add(layers.Conv2D(32,(3,3),activation = 'relu',input_shape = (150,150,3))) ## inputs are 150x150 images with 3 color channels
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Conv2D(64,(3,3),activation = 'relu')) 
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Conv2D(128,(3,3),activation = 'relu')) 
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Conv2D(128,(3,3),activation = 'relu')) 
model.add(layers.MaxPooling2D((2,2)))

model.add(layers.Flatten())
model.add(layers.Dropout(0.5)) ## add dropout
model.add(layers.Dense(512,activation = 'relu'))
model.add(layers.Dense(1,activation = 'sigmoid')) ## sigmoid output for binary classification

model.compile(loss = 'binary_crossentropy',
             optimizer=optimizers.RMSprop(lr = 1e-4),
             metrics = ['acc'])
In [31]:
model.summary()
Model: "sequential_2"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_5 (Conv2D)            (None, 148, 148, 32)      896       
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 74, 74, 32)        0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 72, 72, 64)        18496     
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 36, 36, 64)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 34, 34, 128)       73856     
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 17, 17, 128)       0         
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 15, 15, 128)       147584    
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 6272)              0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 6272)              0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               3211776   
_________________________________________________________________
dense_4 (Dense)              (None, 1)                 513       
=================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0
_________________________________________________________________

Train the convnet using the data-augmentation generators

In [33]:
train_datagen = ImageDataGenerator(rescale=1./255,
                                  rotation_range=40,
                                  width_shift_range=0.2,
                                  height_shift_range=0.2,
                                  shear_range=0.2,
                                  zoom_range=0.2,
                                  horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255) ## never augment validation/test data

train_generator = train_datagen.flow_from_directory(train_dir,
                                                   target_size=(150,150),
                                                   batch_size=32,
                                                   class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir,
                                                   target_size=(150,150),
                                                   batch_size=32,
                                                   class_mode='binary')
history = model.fit_generator(train_generator,
                             steps_per_epoch=100,
                             epochs=100,
                             validation_data=validation_generator,
                             validation_steps=50)
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/100
100/100 [==============================] - 19s 193ms/step - loss: 0.4576 - acc: 0.7866 - val_loss: 0.3420 - val_acc: 0.7824
Epoch 2/100
100/100 [==============================] - 18s 176ms/step - loss: 0.4458 - acc: 0.7886 - val_loss: 0.4070 - val_acc: 0.7726
Epoch 3/100
100/100 [==============================] - 18s 176ms/step - loss: 0.4575 - acc: 0.7905 - val_loss: 0.2909 - val_acc: 0.7766
Epoch 4/100
100/100 [==============================] - 18s 178ms/step - loss: 0.4420 - acc: 0.7879 - val_loss: 0.3833 - val_acc: 0.8106
Epoch 5/100
100/100 [==============================] - 18s 175ms/step - loss: 0.4379 - acc: 0.8021 - val_loss: 0.6045 - val_acc: 0.7830
Epoch 6/100
100/100 [==============================] - 19s 191ms/step - loss: 0.4251 - acc: 0.8002 - val_loss: 0.2615 - val_acc: 0.7990
Epoch 7/100
100/100 [==============================] - 18s 176ms/step - loss: 0.4437 - acc: 0.7993 - val_loss: 0.2986 - val_acc: 0.8122
Epoch 8/100
100/100 [==============================] - 18s 180ms/step - loss: 0.4336 - acc: 0.7968 - val_loss: 0.3202 - val_acc: 0.7513
Epoch 9/100
100/100 [==============================] - 17s 174ms/step - loss: 0.4339 - acc: 0.7964 - val_loss: 0.4179 - val_acc: 0.8131
Epoch 10/100
100/100 [==============================] - 17s 174ms/step - loss: 0.4506 - acc: 0.7819 - val_loss: 0.6168 - val_acc: 0.7836
Epoch 11/100
100/100 [==============================] - 18s 183ms/step - loss: 0.4147 - acc: 0.8081 - val_loss: 0.5590 - val_acc: 0.8067
Epoch 12/100
100/100 [==============================] - 18s 175ms/step - loss: 0.4369 - acc: 0.7981 - val_loss: 0.2970 - val_acc: 0.7919
Epoch 13/100
100/100 [==============================] - 18s 177ms/step - loss: 0.4340 - acc: 0.8050 - val_loss: 0.4970 - val_acc: 0.7990
Epoch 14/100
100/100 [==============================] - 17s 175ms/step - loss: 0.4125 - acc: 0.8144 - val_loss: 0.1985 - val_acc: 0.7798
Epoch 15/100
100/100 [==============================] - 18s 180ms/step - loss: 0.4098 - acc: 0.8172 - val_loss: 0.5974 - val_acc: 0.7513
Epoch 16/100
100/100 [==============================] - 18s 177ms/step - loss: 0.4196 - acc: 0.8018 - val_loss: 0.7453 - val_acc: 0.7990
Epoch 17/100
100/100 [==============================] - 17s 174ms/step - loss: 0.4161 - acc: 0.8074 - val_loss: 0.4885 - val_acc: 0.7836
Epoch 18/100
100/100 [==============================] - 19s 193ms/step - loss: 0.4129 - acc: 0.8141 - val_loss: 0.3813 - val_acc: 0.8073
Epoch 19/100
100/100 [==============================] - 17s 175ms/step - loss: 0.3990 - acc: 0.8157 - val_loss: 0.2363 - val_acc: 0.8135
Epoch 20/100
100/100 [==============================] - 18s 179ms/step - loss: 0.3982 - acc: 0.8188 - val_loss: 0.8169 - val_acc: 0.7378
Epoch 21/100
100/100 [==============================] - 17s 175ms/step - loss: 0.4089 - acc: 0.8119 - val_loss: 0.4310 - val_acc: 0.7957
Epoch 22/100
100/100 [==============================] - 18s 179ms/step - loss: 0.4027 - acc: 0.8103 - val_loss: 0.4783 - val_acc: 0.8009
Epoch 23/100
100/100 [==============================] - 19s 186ms/step - loss: 0.3962 - acc: 0.8166 - val_loss: 0.7082 - val_acc: 0.7760
Epoch 24/100
100/100 [==============================] - 18s 180ms/step - loss: 0.4143 - acc: 0.8033 - val_loss: 0.6326 - val_acc: 0.7661
Epoch 25/100
100/100 [==============================] - 18s 175ms/step - loss: 0.3899 - acc: 0.8182 - val_loss: 0.4252 - val_acc: 0.7925
Epoch 26/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3910 - acc: 0.8213 - val_loss: 0.3752 - val_acc: 0.8071
Epoch 27/100
100/100 [==============================] - 17s 175ms/step - loss: 0.3936 - acc: 0.8251 - val_loss: 0.4032 - val_acc: 0.7970
Epoch 28/100
100/100 [==============================] - 18s 181ms/step - loss: 0.3831 - acc: 0.8336 - val_loss: 0.3482 - val_acc: 0.7912
Epoch 29/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3876 - acc: 0.8260 - val_loss: 0.3818 - val_acc: 0.8112
Epoch 30/100
100/100 [==============================] - 17s 173ms/step - loss: 0.3903 - acc: 0.8213 - val_loss: 0.2104 - val_acc: 0.7608
Epoch 31/100
100/100 [==============================] - 18s 181ms/step - loss: 0.3833 - acc: 0.8257 - val_loss: 0.3270 - val_acc: 0.8093
Epoch 32/100
100/100 [==============================] - 17s 173ms/step - loss: 0.3806 - acc: 0.8283 - val_loss: 0.4413 - val_acc: 0.7951
Epoch 33/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3989 - acc: 0.8153 - val_loss: 0.5239 - val_acc: 0.7931
Epoch 34/100
100/100 [==============================] - 17s 174ms/step - loss: 0.3690 - acc: 0.8373 - val_loss: 0.3926 - val_acc: 0.8112
Epoch 35/100
100/100 [==============================] - 19s 189ms/step - loss: 0.3813 - acc: 0.8289 - val_loss: 0.4349 - val_acc: 0.8084
Epoch 36/100
100/100 [==============================] - 19s 192ms/step - loss: 0.3841 - acc: 0.8213 - val_loss: 0.2495 - val_acc: 0.8009
Epoch 37/100
100/100 [==============================] - 18s 181ms/step - loss: 0.3761 - acc: 0.8346 - val_loss: 0.5487 - val_acc: 0.7862
Epoch 38/100
100/100 [==============================] - 18s 184ms/step - loss: 0.3742 - acc: 0.8339 - val_loss: 0.3909 - val_acc: 0.8189
Epoch 39/100
100/100 [==============================] - 18s 179ms/step - loss: 0.3772 - acc: 0.8302 - val_loss: 0.4405 - val_acc: 0.8077
Epoch 40/100
100/100 [==============================] - 20s 196ms/step - loss: 0.3732 - acc: 0.8295 - val_loss: 0.2996 - val_acc: 0.7848
Epoch 41/100
100/100 [==============================] - 18s 182ms/step - loss: 0.3588 - acc: 0.8352 - val_loss: 0.6172 - val_acc: 0.8015
Epoch 42/100
100/100 [==============================] - 18s 179ms/step - loss: 0.3540 - acc: 0.8453 - val_loss: 0.3003 - val_acc: 0.8052
Epoch 43/100
100/100 [==============================] - 18s 181ms/step - loss: 0.3658 - acc: 0.8339 - val_loss: 0.4270 - val_acc: 0.8189
Epoch 44/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3583 - acc: 0.8475 - val_loss: 0.3365 - val_acc: 0.8154
Epoch 45/100
100/100 [==============================] - 19s 185ms/step - loss: 0.3520 - acc: 0.8436 - val_loss: 0.5580 - val_acc: 0.7880
Epoch 46/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3502 - acc: 0.8444 - val_loss: 0.6109 - val_acc: 0.8115
Epoch 47/100
100/100 [==============================] - 18s 183ms/step - loss: 0.3570 - acc: 0.8433 - val_loss: 0.5985 - val_acc: 0.8048
Epoch 48/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3421 - acc: 0.8485 - val_loss: 0.1831 - val_acc: 0.8170
Epoch 49/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3567 - acc: 0.8425 - val_loss: 0.5128 - val_acc: 0.8160
Epoch 50/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3472 - acc: 0.8442 - val_loss: 0.3123 - val_acc: 0.8086
Epoch 51/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3454 - acc: 0.8419 - val_loss: 0.6317 - val_acc: 0.7849
Epoch 52/100
100/100 [==============================] - 19s 192ms/step - loss: 0.3446 - acc: 0.8412 - val_loss: 0.3880 - val_acc: 0.8318
Epoch 53/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3375 - acc: 0.8514 - val_loss: 0.4956 - val_acc: 0.8306
Epoch 54/100
100/100 [==============================] - 18s 182ms/step - loss: 0.3473 - acc: 0.8428 - val_loss: 0.3659 - val_acc: 0.8228
Epoch 55/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3462 - acc: 0.8450 - val_loss: 0.5181 - val_acc: 0.7912
Epoch 56/100
100/100 [==============================] - 18s 183ms/step - loss: 0.3390 - acc: 0.8524 - val_loss: 0.2566 - val_acc: 0.8183
Epoch 57/100
100/100 [==============================] - 19s 190ms/step - loss: 0.3430 - acc: 0.8562 - val_loss: 0.4474 - val_acc: 0.7713
Epoch 58/100
100/100 [==============================] - 18s 178ms/step - loss: 0.3216 - acc: 0.8583 - val_loss: 0.3771 - val_acc: 0.7893
Epoch 59/100
100/100 [==============================] - 18s 179ms/step - loss: 0.3428 - acc: 0.8477 - val_loss: 0.3559 - val_acc: 0.7990
Epoch 60/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3341 - acc: 0.8493 - val_loss: 0.6045 - val_acc: 0.7532
Epoch 61/100
100/100 [==============================] - 18s 180ms/step - loss: 0.3209 - acc: 0.8618 - val_loss: 0.6191 - val_acc: 0.7899
Epoch 62/100
100/100 [==============================] - 18s 181ms/step - loss: 0.3337 - acc: 0.8587 - val_loss: 0.3722 - val_acc: 0.8052
Epoch 63/100
100/100 [==============================] - 18s 180ms/step - loss: 0.3292 - acc: 0.8570 - val_loss: 0.5298 - val_acc: 0.8254
Epoch 64/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3243 - acc: 0.8599 - val_loss: 0.0747 - val_acc: 0.7848
Epoch 65/100
100/100 [==============================] - 18s 179ms/step - loss: 0.3310 - acc: 0.8576 - val_loss: 0.5133 - val_acc: 0.8287
Epoch 66/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3367 - acc: 0.8573 - val_loss: 0.1955 - val_acc: 0.8260
Epoch 67/100
100/100 [==============================] - 18s 178ms/step - loss: 0.3163 - acc: 0.8543 - val_loss: 0.5202 - val_acc: 0.7754
Epoch 68/100
100/100 [==============================] - 18s 178ms/step - loss: 0.3079 - acc: 0.8655 - val_loss: 0.4557 - val_acc: 0.8338
Epoch 69/100
100/100 [==============================] - 19s 189ms/step - loss: 0.3308 - acc: 0.8543 - val_loss: 0.3304 - val_acc: 0.7900
Epoch 70/100
100/100 [==============================] - 18s 182ms/step - loss: 0.3077 - acc: 0.8696 - val_loss: 0.5216 - val_acc: 0.8235
Epoch 71/100
100/100 [==============================] - 18s 179ms/step - loss: 0.3132 - acc: 0.8549 - val_loss: 0.3791 - val_acc: 0.8166
Epoch 72/100
100/100 [==============================] - 18s 184ms/step - loss: 0.3113 - acc: 0.8658 - val_loss: 0.5609 - val_acc: 0.7822
Epoch 73/100
100/100 [==============================] - 18s 177ms/step - loss: 0.3337 - acc: 0.8570 - val_loss: 0.1964 - val_acc: 0.7957
Epoch 74/100
100/100 [==============================] - 18s 184ms/step - loss: 0.3031 - acc: 0.8653 - val_loss: 1.0364 - val_acc: 0.8242
Epoch 75/100
100/100 [==============================] - 18s 180ms/step - loss: 0.3076 - acc: 0.8681 - val_loss: 0.1994 - val_acc: 0.8125
Epoch 76/100
100/100 [==============================] - 18s 180ms/step - loss: 0.2923 - acc: 0.8769 - val_loss: 0.3370 - val_acc: 0.8331
Epoch 77/100
100/100 [==============================] - 18s 182ms/step - loss: 0.3107 - acc: 0.8602 - val_loss: 0.4574 - val_acc: 0.8318
Epoch 78/100
100/100 [==============================] - 18s 178ms/step - loss: 0.3023 - acc: 0.8763 - val_loss: 1.3822 - val_acc: 0.7614
Epoch 79/100
100/100 [==============================] - 18s 184ms/step - loss: 0.2998 - acc: 0.8712 - val_loss: 0.7947 - val_acc: 0.8164
Epoch 80/100
100/100 [==============================] - 18s 176ms/step - loss: 0.3010 - acc: 0.8608 - val_loss: 0.5776 - val_acc: 0.7919
Epoch 81/100
100/100 [==============================] - 19s 194ms/step - loss: 0.2896 - acc: 0.8737 - val_loss: 0.3232 - val_acc: 0.8192
Epoch 82/100
100/100 [==============================] - 18s 177ms/step - loss: 0.2905 - acc: 0.8778 - val_loss: 0.2846 - val_acc: 0.8286
Epoch 83/100
100/100 [==============================] - 18s 177ms/step - loss: 0.2732 - acc: 0.8867 - val_loss: 0.5820 - val_acc: 0.8261
Epoch 84/100
100/100 [==============================] - 18s 180ms/step - loss: 0.2892 - acc: 0.8741 - val_loss: 0.5068 - val_acc: 0.8595
Epoch 85/100
100/100 [==============================] - 18s 176ms/step - loss: 0.2766 - acc: 0.8766 - val_loss: 0.1404 - val_acc: 0.8077
Epoch 86/100
100/100 [==============================] - 19s 193ms/step - loss: 0.2949 - acc: 0.8684 - val_loss: 0.3507 - val_acc: 0.8299
Epoch 87/100
100/100 [==============================] - 18s 178ms/step - loss: 0.2938 - acc: 0.8769 - val_loss: 0.2937 - val_acc: 0.8445
Epoch 88/100
100/100 [==============================] - 18s 183ms/step - loss: 0.2800 - acc: 0.8854 - val_loss: 0.5988 - val_acc: 0.7635
Epoch 89/100
100/100 [==============================] - 18s 179ms/step - loss: 0.2837 - acc: 0.8737 - val_loss: 0.4927 - val_acc: 0.8280
Epoch 90/100
100/100 [==============================] - 18s 176ms/step - loss: 0.2846 - acc: 0.8797 - val_loss: 0.2400 - val_acc: 0.8376
Epoch 91/100
100/100 [==============================] - 19s 185ms/step - loss: 0.2993 - acc: 0.8665 - val_loss: 0.3846 - val_acc: 0.8196
Epoch 92/100
100/100 [==============================] - 18s 177ms/step - loss: 0.2745 - acc: 0.8778 - val_loss: 0.3000 - val_acc: 0.7957
Epoch 93/100
100/100 [==============================] - 18s 179ms/step - loss: 0.2768 - acc: 0.8794 - val_loss: 0.6378 - val_acc: 0.8344
Epoch 94/100
100/100 [==============================] - 18s 178ms/step - loss: 0.2699 - acc: 0.8873 - val_loss: 0.3398 - val_acc: 0.8401
Epoch 95/100
100/100 [==============================] - 18s 183ms/step - loss: 0.2717 - acc: 0.8800 - val_loss: 0.3738 - val_acc: 0.8254
Epoch 96/100
100/100 [==============================] - 18s 176ms/step - loss: 0.2813 - acc: 0.8813 - val_loss: 0.3001 - val_acc: 0.8228
Epoch 97/100
100/100 [==============================] - 18s 177ms/step - loss: 0.2649 - acc: 0.8891 - val_loss: 0.3843 - val_acc: 0.8166
Epoch 98/100
100/100 [==============================] - 19s 194ms/step - loss: 0.2768 - acc: 0.8841 - val_loss: 0.3967 - val_acc: 0.8305
Epoch 99/100
100/100 [==============================] - 18s 178ms/step - loss: 0.2714 - acc: 0.8838 - val_loss: 0.4264 - val_acc: 0.8217
Epoch 100/100
100/100 [==============================] - 18s 178ms/step - loss: 0.2602 - acc: 0.8924 - val_loss: 0.1083 - val_acc: 0.8299
In [34]:
model.save('cats_and_dogs_small_2.h5')
In [35]:
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1,len(acc)+1)

plt.plot(epochs,acc,'bo',label = "Training acc")
plt.plot(epochs,val_acc,'b',label = "Validation acc")
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs,loss,'bo',label = "Training loss")
plt.plot(epochs,val_loss,'b',label = "Validation loss")
plt.title('Training and validation loss')
plt.legend()

plt.show()

5. Use a pretrained convolutional neural network

In [39]:
from keras.applications import VGG16

conv_base = VGG16(weights = 'imagenet',
                 include_top = False,
                 input_shape = (150,150,3))
Downloading data from https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5
58892288/58889256 [==============================] - 545s 9us/step

Use the VGG16 convolutional base for feature extraction

In [41]:
from keras import models
from keras import layers

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256,activation = 'relu'))
model.add(layers.Dense(1,activation = 'sigmoid'))
In [42]:
model.summary()
Model: "sequential_3"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg16 (Model)                (None, 4, 4, 512)         14714688  
_________________________________________________________________
flatten_3 (Flatten)          (None, 8192)              0         
_________________________________________________________________
dense_5 (Dense)              (None, 256)               2097408   
_________________________________________________________________
dense_6 (Dense)              (None, 1)                 257       
=================================================================
Total params: 16,812,353
Trainable params: 16,812,353
Non-trainable params: 0
_________________________________________________________________
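The output shape and parameter counts in the summary above can be verified by hand. A quick sanity check in plain Python (no Keras required; the 14,714,688 conv-base count is taken from the summary):

```python
# VGG16 has five max-pool stages, each halving (with floor) the spatial size
h = 150
for _ in range(5):
    h //= 2                      # 150 -> 75 -> 37 -> 18 -> 9 -> 4
flat = h * h * 512               # Flatten: 4 * 4 * 512 = 8192 features
dense1 = flat * 256 + 256        # Dense(256): weights + biases
dense2 = 256 * 1 + 1             # Dense(1): weights + bias
vgg16 = 14_714_688               # conv-base parameters, from the summary above
print(h, flat, dense1, dense2, vgg16 + dense1 + dense2)
# -> 4 8192 2097408 257 16812353, matching the summary
```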
In [43]:
## Freeze the conv_base weights so only the new classifier is trained
conv_base.trainable = False
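After freezing, only the two dense layers of the classifier head remain trainable. Their expected count follows from the summary numbers above (plain arithmetic, no Keras needed):

```python
total = 16_812_353               # total params from model.summary()
conv_base_params = 14_714_688    # VGG16 conv-base params, now frozen
trainable_after_freeze = total - conv_base_params
print(trainable_after_freeze)    # -> 2097665, i.e. Dense(256) + Dense(1)
```

In Keras, `len(model.trainable_weights)` before and after setting `conv_base.trainable = False` confirms the same thing at the weight-tensor level.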
In [45]:
from keras.preprocessing.image import ImageDataGenerator  # re-imported so this cell is self-contained
from keras import optimizers

train_datagen = ImageDataGenerator(rescale=1./255,
                                  rotation_range=40,
                                  width_shift_range=0.2,
                                  height_shift_range=0.2,
                                  shear_range=0.2,
                                  zoom_range=0.2,
                                  horizontal_flip=True,
                                  fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255) ## no augmentation for the validation/test data

train_generator = train_datagen.flow_from_directory(train_dir,
                                                   target_size=(150,150),
                                                   batch_size=20,
                                                   class_mode='binary')
validation_generator = test_datagen.flow_from_directory(validation_dir,
                                                   target_size=(150,150),
                                                   batch_size=20,
                                                   class_mode='binary')
model.compile(loss = 'binary_crossentropy',
             optimizer=optimizers.RMSprop(lr = 1e-5),
             metrics = ['acc'])

history = model.fit_generator(train_generator,
                             steps_per_epoch=100,
                             epochs=30,
                             validation_data=validation_generator,
                             validation_steps=50)
Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/30
100/100 [==============================] - 16s 160ms/step - loss: 0.6412 - acc: 0.6315 - val_loss: 0.5159 - val_acc: 0.7390
Epoch 2/30
100/100 [==============================] - 15s 148ms/step - loss: 0.5485 - acc: 0.7480 - val_loss: 0.4838 - val_acc: 0.8040
Epoch 3/30
100/100 [==============================] - 15s 150ms/step - loss: 0.4967 - acc: 0.7885 - val_loss: 0.3362 - val_acc: 0.8320
Epoch 4/30
100/100 [==============================] - 15s 147ms/step - loss: 0.4549 - acc: 0.8045 - val_loss: 0.3201 - val_acc: 0.8670
Epoch 5/30
100/100 [==============================] - 15s 152ms/step - loss: 0.4347 - acc: 0.8060 - val_loss: 0.4302 - val_acc: 0.8730
Epoch 6/30
100/100 [==============================] - 15s 151ms/step - loss: 0.4242 - acc: 0.8110 - val_loss: 0.4222 - val_acc: 0.8720
Epoch 7/30
100/100 [==============================] - 15s 148ms/step - loss: 0.4012 - acc: 0.8320 - val_loss: 0.2640 - val_acc: 0.8810
Epoch 8/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3917 - acc: 0.8250 - val_loss: 0.4245 - val_acc: 0.8820
Epoch 9/30
100/100 [==============================] - 15s 148ms/step - loss: 0.3842 - acc: 0.8320 - val_loss: 0.2971 - val_acc: 0.8870
Epoch 10/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3827 - acc: 0.8295 - val_loss: 0.3741 - val_acc: 0.8900
Epoch 11/30
100/100 [==============================] - 15s 148ms/step - loss: 0.3589 - acc: 0.8485 - val_loss: 0.3988 - val_acc: 0.8890
Epoch 12/30
100/100 [==============================] - 15s 149ms/step - loss: 0.3536 - acc: 0.8525 - val_loss: 0.2365 - val_acc: 0.8920
Epoch 13/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3480 - acc: 0.8510 - val_loss: 0.2776 - val_acc: 0.8890
Epoch 14/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3473 - acc: 0.8415 - val_loss: 0.2172 - val_acc: 0.8940
Epoch 15/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3405 - acc: 0.8615 - val_loss: 0.2620 - val_acc: 0.8910
Epoch 16/30
100/100 [==============================] - 15s 146ms/step - loss: 0.3524 - acc: 0.8365 - val_loss: 0.5111 - val_acc: 0.8940
Epoch 17/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3389 - acc: 0.8495 - val_loss: 0.1789 - val_acc: 0.8990
Epoch 18/30
100/100 [==============================] - 15s 148ms/step - loss: 0.3457 - acc: 0.8500 - val_loss: 0.3568 - val_acc: 0.8970
Epoch 19/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3319 - acc: 0.8575 - val_loss: 0.2385 - val_acc: 0.8970
Epoch 20/30
100/100 [==============================] - 15s 148ms/step - loss: 0.3220 - acc: 0.8640 - val_loss: 0.5159 - val_acc: 0.8960
Epoch 21/30
100/100 [==============================] - 15s 146ms/step - loss: 0.3221 - acc: 0.8605 - val_loss: 0.2260 - val_acc: 0.9000
Epoch 22/30
100/100 [==============================] - 15s 149ms/step - loss: 0.3264 - acc: 0.8555 - val_loss: 0.1481 - val_acc: 0.9000
Epoch 23/30
100/100 [==============================] - 15s 148ms/step - loss: 0.3243 - acc: 0.8635 - val_loss: 0.3885 - val_acc: 0.9000
Epoch 24/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3089 - acc: 0.8695 - val_loss: 0.3450 - val_acc: 0.9010
Epoch 25/30
100/100 [==============================] - 15s 146ms/step - loss: 0.3116 - acc: 0.8715 - val_loss: 0.3326 - val_acc: 0.9010
Epoch 26/30
100/100 [==============================] - 15s 146ms/step - loss: 0.3123 - acc: 0.8725 - val_loss: 0.2171 - val_acc: 0.9040
Epoch 27/30
100/100 [==============================] - 15s 148ms/step - loss: 0.3109 - acc: 0.8710 - val_loss: 0.2963 - val_acc: 0.9030
Epoch 28/30
100/100 [==============================] - 15s 148ms/step - loss: 0.2996 - acc: 0.8725 - val_loss: 0.3507 - val_acc: 0.9050
Epoch 29/30
100/100 [==============================] - 14s 145ms/step - loss: 0.3136 - acc: 0.8615 - val_loss: 0.1105 - val_acc: 0.9060
Epoch 30/30
100/100 [==============================] - 15s 147ms/step - loss: 0.3067 - acc: 0.8695 - val_loss: 0.2895 - val_acc: 0.9030
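The `steps_per_epoch=100` and `validation_steps=50` values used above follow directly from the dataset and batch sizes, so each epoch sees every image exactly once. A small sanity check:

```python
train_images, val_images, batch_size = 2000, 1000, 20
steps_per_epoch = train_images // batch_size    # 100 batches covers the training set once
validation_steps = val_images // batch_size     # 50 batches covers the validation set once
print(steps_per_epoch, validation_steps)        # -> 100 50
```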
In [46]:
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1,len(acc)+1)

plt.plot(epochs,acc,'bo',label = "Training acc")
plt.plot(epochs,val_acc,'b',label = "Validation acc")
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs,loss,'bo',label = "Training loss")
plt.plot(epochs,val_loss,'b',label = "Validation loss")
plt.title('Training and validation loss')
plt.legend()

plt.show()

Fine-tune the model: unfreeze the last three convolutional layers of conv_base (the block5 layers).

Continuing from the training above, perform the following: unfreeze part of the base network, then jointly train the unfrozen layers together with the newly added classifier.

In [ ]:
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:  # `layers` is an attribute, not a method
    if layer.name == 'block5_conv1':
        set_trainable = True    # unfreeze from block5_conv1 onwards
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False
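The loop above relies on layer order: a flag flips to True when it reaches `'block5_conv1'` and stays on for every later layer. The same flag pattern demonstrated on plain strings (a standalone sketch; the names below mirror the tail of VGG16's layer list):

```python
names = ['block4_pool', 'block5_conv1', 'block5_conv2', 'block5_conv3', 'block5_pool']
set_trainable = False
trainable = {}
for name in names:
    if name == 'block5_conv1':
        set_trainable = True    # flag stays on for all subsequent layers
    trainable[name] = set_trainable
print(trainable)
# block4_pool stays frozen; everything from block5_conv1 onward is unfrozen
```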
In [47]:
model.compile(loss = 'binary_crossentropy',
             optimizer=optimizers.RMSprop(lr = 1e-5),
             metrics = ['acc'])

history = model.fit_generator(train_generator,
                             steps_per_epoch=100,
                             epochs=100,
                             validation_data=validation_generator,
                             validation_steps=50)
Epoch 1/100
100/100 [==============================] - 16s 160ms/step - loss: 0.2961 - acc: 0.8655 - val_loss: 0.2628 - val_acc: 0.9030
Epoch 2/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2967 - acc: 0.8805 - val_loss: 0.2746 - val_acc: 0.9080
Epoch 3/100
100/100 [==============================] - 15s 148ms/step - loss: 0.3140 - acc: 0.8640 - val_loss: 0.2906 - val_acc: 0.8980
Epoch 4/100
100/100 [==============================] - 15s 150ms/step - loss: 0.3198 - acc: 0.8580 - val_loss: 0.1632 - val_acc: 0.9000
Epoch 5/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2960 - acc: 0.8655 - val_loss: 0.1849 - val_acc: 0.9060
Epoch 6/100
100/100 [==============================] - 15s 147ms/step - loss: 0.3019 - acc: 0.8710 - val_loss: 0.2188 - val_acc: 0.9070
Epoch 7/100
100/100 [==============================] - 15s 153ms/step - loss: 0.3051 - acc: 0.8725 - val_loss: 0.3292 - val_acc: 0.9060
Epoch 8/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2931 - acc: 0.8805 - val_loss: 0.3204 - val_acc: 0.9030
Epoch 9/100
100/100 [==============================] - 15s 148ms/step - loss: 0.3047 - acc: 0.8685 - val_loss: 0.1118 - val_acc: 0.9000
Epoch 10/100
100/100 [==============================] - 15s 150ms/step - loss: 0.2861 - acc: 0.8860 - val_loss: 0.1412 - val_acc: 0.9020
Epoch 11/100
100/100 [==============================] - 15s 151ms/step - loss: 0.3092 - acc: 0.8680 - val_loss: 0.4090 - val_acc: 0.9040
Epoch 12/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2936 - acc: 0.8685 - val_loss: 0.3495 - val_acc: 0.9030
Epoch 13/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2960 - acc: 0.8730 - val_loss: 0.0613 - val_acc: 0.9070
Epoch 14/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2789 - acc: 0.8780 - val_loss: 0.2803 - val_acc: 0.9090
Epoch 15/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2954 - acc: 0.8730 - val_loss: 0.2093 - val_acc: 0.9090
Epoch 16/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2857 - acc: 0.8810 - val_loss: 0.2477 - val_acc: 0.9060
Epoch 17/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2812 - acc: 0.8805 - val_loss: 0.1987 - val_acc: 0.9090
Epoch 18/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2841 - acc: 0.8785 - val_loss: 0.2082 - val_acc: 0.9090
Epoch 19/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2803 - acc: 0.8860 - val_loss: 0.4432 - val_acc: 0.8940
Epoch 20/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2856 - acc: 0.8860 - val_loss: 0.4499 - val_acc: 0.9010
Epoch 21/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2843 - acc: 0.8820 - val_loss: 0.2528 - val_acc: 0.8940
Epoch 22/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2858 - acc: 0.8715 - val_loss: 0.2970 - val_acc: 0.9030
Epoch 23/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2732 - acc: 0.8810 - val_loss: 0.3680 - val_acc: 0.9020
Epoch 24/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2841 - acc: 0.8795 - val_loss: 0.3954 - val_acc: 0.8960
Epoch 25/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2712 - acc: 0.8840 - val_loss: 0.1395 - val_acc: 0.9020
Epoch 26/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2688 - acc: 0.8830 - val_loss: 0.2907 - val_acc: 0.9080
Epoch 27/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2777 - acc: 0.8795 - val_loss: 0.2820 - val_acc: 0.9040
Epoch 28/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2804 - acc: 0.8810 - val_loss: 0.1431 - val_acc: 0.9070
Epoch 29/100
100/100 [==============================] - 15s 151ms/step - loss: 0.2664 - acc: 0.8925 - val_loss: 0.1372 - val_acc: 0.8990
Epoch 30/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2716 - acc: 0.8820 - val_loss: 0.1181 - val_acc: 0.8990
Epoch 31/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2779 - acc: 0.8850 - val_loss: 0.0598 - val_acc: 0.9020
Epoch 32/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2742 - acc: 0.8830 - val_loss: 0.3579 - val_acc: 0.9090
Epoch 33/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2762 - acc: 0.8840 - val_loss: 0.1587 - val_acc: 0.9040
Epoch 34/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2637 - acc: 0.8930 - val_loss: 0.1521 - val_acc: 0.9020
Epoch 35/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2726 - acc: 0.8780 - val_loss: 0.2204 - val_acc: 0.9120
Epoch 36/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2683 - acc: 0.8840 - val_loss: 0.1611 - val_acc: 0.9030
Epoch 37/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2745 - acc: 0.8795 - val_loss: 0.1337 - val_acc: 0.9060
Epoch 38/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2746 - acc: 0.8810 - val_loss: 0.2129 - val_acc: 0.8950
Epoch 39/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2632 - acc: 0.8860 - val_loss: 0.0721 - val_acc: 0.9040
Epoch 40/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2770 - acc: 0.8775 - val_loss: 0.2327 - val_acc: 0.9080
Epoch 41/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2816 - acc: 0.8760 - val_loss: 0.2703 - val_acc: 0.9130
Epoch 42/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2659 - acc: 0.8880 - val_loss: 0.3029 - val_acc: 0.9110
Epoch 43/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2774 - acc: 0.8705 - val_loss: 0.3498 - val_acc: 0.9100
Epoch 44/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2804 - acc: 0.8830 - val_loss: 0.0769 - val_acc: 0.9120
Epoch 45/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2691 - acc: 0.8810 - val_loss: 0.0767 - val_acc: 0.9050
Epoch 46/100
100/100 [==============================] - 15s 150ms/step - loss: 0.2799 - acc: 0.8820 - val_loss: 0.1754 - val_acc: 0.9030
Epoch 47/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2561 - acc: 0.8975 - val_loss: 0.4715 - val_acc: 0.9100
Epoch 48/100
100/100 [==============================] - 15s 150ms/step - loss: 0.2526 - acc: 0.8950 - val_loss: 0.2207 - val_acc: 0.8990
Epoch 49/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2607 - acc: 0.8925 - val_loss: 0.2534 - val_acc: 0.9020
Epoch 50/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2641 - acc: 0.8895 - val_loss: 0.1784 - val_acc: 0.9000
Epoch 51/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2560 - acc: 0.8895 - val_loss: 0.2203 - val_acc: 0.9050
Epoch 52/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2639 - acc: 0.8805 - val_loss: 0.1354 - val_acc: 0.9050
Epoch 53/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2571 - acc: 0.8950 - val_loss: 0.3110 - val_acc: 0.9080
Epoch 54/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2560 - acc: 0.8920 - val_loss: 0.2549 - val_acc: 0.9060
Epoch 55/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2664 - acc: 0.8805 - val_loss: 0.2932 - val_acc: 0.9060
Epoch 56/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2639 - acc: 0.8880 - val_loss: 0.1019 - val_acc: 0.9050
Epoch 57/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2571 - acc: 0.8930 - val_loss: 0.4840 - val_acc: 0.9080
Epoch 58/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2559 - acc: 0.8910 - val_loss: 0.1199 - val_acc: 0.9070
Epoch 59/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2623 - acc: 0.8910 - val_loss: 0.2438 - val_acc: 0.9090
Epoch 60/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2707 - acc: 0.8835 - val_loss: 0.1427 - val_acc: 0.9080
Epoch 61/100
100/100 [==============================] - 15s 152ms/step - loss: 0.2589 - acc: 0.8880 - val_loss: 0.2479 - val_acc: 0.9060
Epoch 62/100
100/100 [==============================] - 15s 150ms/step - loss: 0.2612 - acc: 0.8800 - val_loss: 0.3939 - val_acc: 0.9080
Epoch 63/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2568 - acc: 0.8960 - val_loss: 0.2937 - val_acc: 0.9070
Epoch 64/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2406 - acc: 0.8975 - val_loss: 0.1620 - val_acc: 0.9080
Epoch 65/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2580 - acc: 0.8860 - val_loss: 0.3719 - val_acc: 0.9020
Epoch 66/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2535 - acc: 0.8905 - val_loss: 0.1267 - val_acc: 0.9030
Epoch 67/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2690 - acc: 0.8925 - val_loss: 0.3250 - val_acc: 0.9070
Epoch 68/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2623 - acc: 0.8875 - val_loss: 0.5563 - val_acc: 0.9030
Epoch 69/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2483 - acc: 0.8910 - val_loss: 0.5614 - val_acc: 0.9050
Epoch 70/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2681 - acc: 0.8810 - val_loss: 0.2766 - val_acc: 0.9050
Epoch 71/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2641 - acc: 0.8865 - val_loss: 0.1637 - val_acc: 0.9050
Epoch 72/100
100/100 [==============================] - 15s 153ms/step - loss: 0.2523 - acc: 0.8915 - val_loss: 0.1735 - val_acc: 0.9080
Epoch 73/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2501 - acc: 0.8930 - val_loss: 0.3382 - val_acc: 0.9050
Epoch 74/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2525 - acc: 0.8855 - val_loss: 0.1594 - val_acc: 0.9000
Epoch 75/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2597 - acc: 0.8885 - val_loss: 0.1994 - val_acc: 0.9050
Epoch 76/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2615 - acc: 0.8800 - val_loss: 0.1481 - val_acc: 0.9070
Epoch 77/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2499 - acc: 0.8925 - val_loss: 0.4782 - val_acc: 0.9060
Epoch 78/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2483 - acc: 0.8900 - val_loss: 0.3465 - val_acc: 0.9050
Epoch 79/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2484 - acc: 0.8930 - val_loss: 0.4481 - val_acc: 0.9030
Epoch 80/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2568 - acc: 0.8905 - val_loss: 0.2582 - val_acc: 0.9020
Epoch 81/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2550 - acc: 0.8935 - val_loss: 0.0705 - val_acc: 0.9040
Epoch 82/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2566 - acc: 0.8880 - val_loss: 0.0905 - val_acc: 0.9060
Epoch 83/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2412 - acc: 0.9050 - val_loss: 0.1271 - val_acc: 0.9040
Epoch 84/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2522 - acc: 0.8955 - val_loss: 0.1344 - val_acc: 0.9050
Epoch 85/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2570 - acc: 0.8940 - val_loss: 0.0485 - val_acc: 0.9020
Epoch 86/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2490 - acc: 0.8960 - val_loss: 0.1390 - val_acc: 0.9070
Epoch 87/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2566 - acc: 0.8955 - val_loss: 0.2096 - val_acc: 0.9080
Epoch 88/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2505 - acc: 0.8975 - val_loss: 0.1365 - val_acc: 0.9040
Epoch 89/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2489 - acc: 0.8970 - val_loss: 0.0532 - val_acc: 0.9020
Epoch 90/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2485 - acc: 0.8970 - val_loss: 0.3333 - val_acc: 0.9070
Epoch 91/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2441 - acc: 0.9000 - val_loss: 0.3434 - val_acc: 0.9110
Epoch 92/100
100/100 [==============================] - 15s 146ms/step - loss: 0.2430 - acc: 0.8980 - val_loss: 0.1281 - val_acc: 0.9070
Epoch 93/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2533 - acc: 0.8880 - val_loss: 0.1946 - val_acc: 0.9090
Epoch 94/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2468 - acc: 0.8950 - val_loss: 0.2494 - val_acc: 0.9090
Epoch 95/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2480 - acc: 0.8955 - val_loss: 0.3694 - val_acc: 0.9050
Epoch 96/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2445 - acc: 0.8950 - val_loss: 0.0818 - val_acc: 0.9080
Epoch 97/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2441 - acc: 0.8995 - val_loss: 0.1498 - val_acc: 0.9060
Epoch 98/100
100/100 [==============================] - 15s 147ms/step - loss: 0.2400 - acc: 0.8930 - val_loss: 0.3812 - val_acc: 0.9050
Epoch 99/100
100/100 [==============================] - 15s 148ms/step - loss: 0.2335 - acc: 0.9040 - val_loss: 0.3991 - val_acc: 0.9070
Epoch 100/100
100/100 [==============================] - 15s 149ms/step - loss: 0.2508 - acc: 0.9005 - val_loss: 0.3675 - val_acc: 0.9100
In [48]:
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1,len(acc)+1)

plt.plot(epochs,acc,'bo',label = "Training acc")
plt.plot(epochs,val_acc,'b',label = "Validation acc")
plt.title('Training and validation accuracy')
plt.legend()

plt.figure()
plt.plot(epochs,loss,'bo',label = "Training loss")
plt.plot(epochs,val_loss,'b',label = "Validation loss")
plt.title('Training and validation loss')
plt.legend()

plt.show()
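The fine-tuning curves above are noisy from epoch to epoch, which makes the trend hard to read. An exponentially weighted moving average smooths them out; a minimal sketch (the `smooth_curve` helper is our own addition, applied to the `acc`/`val_acc` lists extracted in the cell above):

```python
def smooth_curve(points, factor=0.8):
    """Exponential moving average: each point blends into the running average."""
    smoothed = []
    for point in points:
        if smoothed:
            smoothed.append(smoothed[-1] * factor + point * (1 - factor))
        else:
            smoothed.append(point)  # first point passes through unchanged
    return smoothed

# e.g. plt.plot(epochs, smooth_curve(val_acc), 'b', label='Smoothed validation acc')
```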

6. Evaluating accuracy on the test set

In [49]:
test_generator = test_datagen.flow_from_directory(test_dir,
                                                 target_size=(150,150),
                                                 batch_size=20,
                                                 class_mode='binary')

test_loss,test_acc = model.evaluate_generator(test_generator,steps=50)
print('test acc:',test_acc)
Found 1000 images belonging to 2 classes.
test acc: 0.906000018119812