
How to Build a Model With Keras and a Few Lines of Code

Too Long; Didn't Read

This guide covers building, training, evaluating, and saving models with the Keras Functional API, including models with complex graph topologies, shared layers, and multiple inputs or outputs.

Overview

  • Setup
  • Introduction
  • Training, evaluation, and inference
  • Save and serialize
  • Use the same graph of layers to define multiple models
  • All models are callable, just like layers
  • Manipulate complex graph topologies
  • Models with multiple inputs and outputs
  • A toy ResNet model

Setup

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

Introduction

The Keras functional API is a way to create models that are more flexible than the keras.Sequential API. The functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs.

The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers, so the functional API is a way to build graphs of layers.

Consider the following model:

(input: 784-dimensional vectors)
        ↧
[Dense (64 units, relu activation)]
        ↧
[Dense (64 units, relu activation)]
        ↧
[Dense (10 units, softmax activation)]
        ↧
(output: logits of a probability distribution over 10 classes)

This is a basic graph with three layers. To build this model using the functional API, start by creating an input node:


inputs = keras.Input(shape=(784,))

The shape of the data is set as a 784-dimensional vector. The batch size is always omitted since only the shape of each sample is specified.

If, for example, you had an image input with a shape of (32, 32, 3), you would have used:


# Just for demonstration purposes.
img_inputs = keras.Input(shape=(32, 32, 3))

The inputs that is returned contains information about the shape and dtype of the input data that you feed to your model. Here's the shape:


inputs.shape


TensorShape([None, 784])

And here's the dtype:


inputs.dtype


tf.float32

You create a new node in the graph of layers by calling a layer on this inputs object:


dense = layers.Dense(64, activation="relu")
x = dense(inputs)

The "layer call" action is like drawing an arrow from "inputs" to this layer you created. You're "passing" the inputs to the dense layer, and you get x as the output.

Let's add a few more layers to the graph of layers:


x = layers.Dense(64, activation="relu")(x)
outputs = layers.Dense(10)(x)

At this point, you can create a Model by specifying its inputs and outputs in the graph of layers:


model = keras.Model(inputs=inputs, outputs=outputs, name="mnist_model")

Let's check out what the model summary looks like:


model.summary()


Model: "mnist_model"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_1 (InputLayer)        [(None, 784)]             0         
                                                                 
 dense (Dense)               (None, 64)                50240     
                                                                 
 dense_1 (Dense)             (None, 64)                4160      
                                                                 
 dense_2 (Dense)             (None, 10)                650       
                                                                 
=================================================================
Total params: 55050 (215.04 KB)
Trainable params: 55050 (215.04 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
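
As a quick check on the Param # column (a sketch, not part of the original guide), each Dense layer contributes input_dim × units weights plus units biases:


# Sketch: reproduce the Param # column of the summary above by hand.
# A Dense layer has (input_dim * units) kernel weights plus `units` biases.
print(784 * 64 + 64)  # 50240 -> dense
print(64 * 64 + 64)   # 4160  -> dense_1
print(64 * 10 + 10)   # 650   -> dense_2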

You can also plot the model as a graph:


keras.utils.plot_model(model, "my_first_model.png")

And, optionally, display the input and output shapes of each layer in the plotted graph:


keras.utils.plot_model(model, "my_first_model_with_shape_info.png", show_shapes=True)

This figure and the code are almost identical. In the code version, the connection arrows are replaced by the call operation.

A "graph of layers" is an intuitive mental image for a deep learning model, and the functional API is a way to create models that closely mirrors this.

Training, evaluation, and inference

Training, evaluation, and inference work in exactly the same way for models built using the functional API as for Sequential models.

The Model class offers a built-in training loop (the fit() method) and a built-in evaluation loop (the evaluate() method). Note that you can easily customize these loops to implement training routines beyond supervised learning (e.g. GANs).

Here, load the MNIST image data, reshape it into vectors, fit the model on the data (while monitoring performance on a validation split), then evaluate the model on the test data:


(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255

model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer=keras.optimizers.RMSprop(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()],
)

history = model.fit(x_train, y_train, batch_size=64, epochs=2, validation_split=0.2)

test_scores = model.evaluate(x_test, y_test, verbose=2)
print("Test loss:", test_scores[0])
print("Test accuracy:", test_scores[1])


Epoch 1/2
750/750 [==============================] - 4s 3ms/step - loss: 0.3556 - sparse_categorical_accuracy: 0.8971 - val_loss: 0.1962 - val_sparse_categorical_accuracy: 0.9422
Epoch 2/2
750/750 [==============================] - 2s 2ms/step - loss: 0.1612 - sparse_categorical_accuracy: 0.9527 - val_loss: 0.1461 - val_sparse_categorical_accuracy: 0.9592
313/313 - 0s - loss: 0.1492 - sparse_categorical_accuracy: 0.9556 - 463ms/epoch - 1ms/step
Test loss: 0.14915992319583893
Test accuracy: 0.9556000232696533

For further reading, see the training and evaluation guide.

Save and serialize

Saving the model and serialization work the same way for models built using the functional API as they do for Sequential models. The standard way to save a functional model is to call model.save() to save the entire model as a single file. You can later recreate the same model from this file, even if the code that built the model is no longer available.

This saved file includes the:

  • model architecture
  • model weight values (that were learned during training)
  • model training config, if any (as passed to compile)
  • optimizer and its state, if any (to restart training where you left off)


model.save("path_to_my_model.keras")
del model
# Recreate the exact same model purely from the file:
model = keras.models.load_model("path_to_my_model.keras")
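
As a quick sanity check (a sketch, not part of the original guide, assuming x_test from the MNIST example above is still in scope), you can confirm that a reloaded model reproduces the original predictions, since the weight values are stored in the file:


# Sketch: predictions made before saving should match predictions made by
# the model reloaded from disk, because the weights travel with the file.
preds_before = model.predict(x_test[:3])
model.save("path_to_my_model.keras")
reloaded = keras.models.load_model("path_to_my_model.keras")
np.testing.assert_allclose(preds_before, reloaded.predict(x_test[:3]), rtol=1e-5)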

For details, read the model serialization & saving guide.

Use the same graph of layers to define multiple models

In the functional API, models are created by specifying their inputs and outputs in a graph of layers. That means a single graph of layers can be used to generate multiple models.

In the example below, you use the same stack of layers to instantiate two models: an encoder model that turns image inputs into 16-dimensional vectors, and an end-to-end autoencoder model for training.


encoder_input = keras.Input(shape=(28, 28, 1), name="img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()

x = layers.Reshape((4, 4, 1))(encoder_output)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)

autoencoder = keras.Model(encoder_input, decoder_output, name="autoencoder")
autoencoder.summary()


Model: "encoder"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 img (InputLayer)            [(None, 28, 28, 1)]       0         
                                                                 
 conv2d (Conv2D)             (None, 26, 26, 16)        160       
                                                                 
 conv2d_1 (Conv2D)           (None, 24, 24, 32)        4640      
                                                                 
 max_pooling2d (MaxPooling2  (None, 8, 8, 32)          0         
 D)                                                              
                                                                 
 conv2d_2 (Conv2D)           (None, 6, 6, 32)          9248      
                                                                 
 conv2d_3 (Conv2D)           (None, 4, 4, 16)          4624      
                                                                 
 global_max_pooling2d (Glob  (None, 16)                0         
 alMaxPooling2D)                                                 
                                                                 
=================================================================
Total params: 18672 (72.94 KB)
Trainable params: 18672 (72.94 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "autoencoder"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 img (InputLayer)            [(None, 28, 28, 1)]       0         
                                                                 
 conv2d (Conv2D)             (None, 26, 26, 16)        160       
                                                                 
 conv2d_1 (Conv2D)           (None, 24, 24, 32)        4640      
                                                                 
 max_pooling2d (MaxPooling2  (None, 8, 8, 32)          0         
 D)                                                              
                                                                 
 conv2d_2 (Conv2D)           (None, 6, 6, 32)          9248      
                                                                 
 conv2d_3 (Conv2D)           (None, 4, 4, 16)          4624      
                                                                 
 global_max_pooling2d (Glob  (None, 16)                0         
 alMaxPooling2D)                                                 
                                                                 
 reshape (Reshape)           (None, 4, 4, 1)           0         
                                                                 
 conv2d_transpose (Conv2DTr  (None, 6, 6, 16)          160       
 anspose)                                                        
                                                                 
 conv2d_transpose_1 (Conv2D  (None, 8, 8, 32)          4640      
 Transpose)                                                      
                                                                 
 up_sampling2d (UpSampling2  (None, 24, 24, 32)        0         
 D)                                                              
                                                                 
 conv2d_transpose_2 (Conv2D  (None, 26, 26, 16)        4624      
 Transpose)                                                      
                                                                 
 conv2d_transpose_3 (Conv2D  (None, 28, 28, 1)         145       
 Transpose)                                                      
                                                                 
=================================================================
Total params: 28241 (110.32 KB)
Trainable params: 28241 (110.32 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________

Here, the decoding architecture is strictly symmetrical to the encoding architecture, so the output shape is the same as the input shape (28, 28, 1).

The reverse of a Conv2D layer is a Conv2DTranspose layer, and the reverse of a MaxPooling2D layer is an UpSampling2D layer.
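
To make the pairing concrete, here is a small sketch (not part of the original guide) showing how each "reverse" layer restores the spatial shape that its counterpart reduced:


# Sketch: shape round-trips for the layer pairs mentioned above.
x_in = keras.Input(shape=(28, 28, 1))

# Conv2D shrinks the spatial dimensions; Conv2DTranspose with the same
# kernel size grows them back.
down = layers.Conv2D(16, 3)(x_in)          # -> (None, 26, 26, 16)
up = layers.Conv2DTranspose(1, 3)(down)    # -> (None, 28, 28, 1)

# MaxPooling2D downsamples; UpSampling2D with the same factor upsamples.
pooled = layers.MaxPooling2D(2)(x_in)      # -> (None, 14, 14, 1)
unpooled = layers.UpSampling2D(2)(pooled)  # -> (None, 28, 28, 1)

print(up.shape, unpooled.shape)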

All models are callable, just like layers

You can treat any model as if it were a layer by invoking it on an Input or on the output of another layer. By calling a model you aren't just reusing the architecture of the model, you're also reusing its weights.

To see this in action, here's a different take on the autoencoder example that creates an encoder model, a decoder model, and chains them in two calls to obtain the autoencoder model:


encoder_input = keras.Input(shape=(28, 28, 1), name="original_img")
x = layers.Conv2D(16, 3, activation="relu")(encoder_input)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.MaxPooling2D(3)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.Conv2D(16, 3, activation="relu")(x)
encoder_output = layers.GlobalMaxPooling2D()(x)

encoder = keras.Model(encoder_input, encoder_output, name="encoder")
encoder.summary()

decoder_input = keras.Input(shape=(16,), name="encoded_img")
x = layers.Reshape((4, 4, 1))(decoder_input)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu")(x)
x = layers.UpSampling2D(3)(x)
x = layers.Conv2DTranspose(16, 3, activation="relu")(x)
decoder_output = layers.Conv2DTranspose(1, 3, activation="relu")(x)

decoder = keras.Model(decoder_input, decoder_output, name="decoder")
decoder.summary()

autoencoder_input = keras.Input(shape=(28, 28, 1), name="img")
encoded_img = encoder(autoencoder_input)
decoded_img = decoder(encoded_img)
autoencoder = keras.Model(autoencoder_input, decoded_img, name="autoencoder")
autoencoder.summary()


Model: "encoder"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 original_img (InputLayer)   [(None, 28, 28, 1)]       0         
                                                                 
 conv2d_4 (Conv2D)           (None, 26, 26, 16)        160       
                                                                 
 conv2d_5 (Conv2D)           (None, 24, 24, 32)        4640      
                                                                 
 max_pooling2d_1 (MaxPoolin  (None, 8, 8, 32)          0         
 g2D)                                                            
                                                                 
 conv2d_6 (Conv2D)           (None, 6, 6, 32)          9248      
                                                                 
 conv2d_7 (Conv2D)           (None, 4, 4, 16)          4624      
                                                                 
 global_max_pooling2d_1 (Gl  (None, 16)                0         
 obalMaxPooling2D)                                               
                                                                 
=================================================================
Total params: 18672 (72.94 KB)
Trainable params: 18672 (72.94 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "decoder"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 encoded_img (InputLayer)    [(None, 16)]              0         
                                                                 
 reshape_1 (Reshape)         (None, 4, 4, 1)           0         
                                                                 
 conv2d_transpose_4 (Conv2D  (None, 6, 6, 16)          160       
 Transpose)                                                      
                                                                 
 conv2d_transpose_5 (Conv2D  (None, 8, 8, 32)          4640      
 Transpose)                                                      
                                                                 
 up_sampling2d_1 (UpSamplin  (None, 24, 24, 32)        0         
 g2D)                                                            
                                                                 
 conv2d_transpose_6 (Conv2D  (None, 26, 26, 16)        4624      
 Transpose)                                                      
                                                                 
 conv2d_transpose_7 (Conv2D  (None, 28, 28, 1)         145       
 Transpose)                                                      
                                                                 
=================================================================
Total params: 9569 (37.38 KB)
Trainable params: 9569 (37.38 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
Model: "autoencoder"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 img (InputLayer)            [(None, 28, 28, 1)]       0         
                                                                 
 encoder (Functional)        (None, 16)                18672     
                                                                 
 decoder (Functional)        (None, 28, 28, 1)         9569      
                                                                 
=================================================================
Total params: 28241 (110.32 KB)
Trainable params: 28241 (110.32 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
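
Because calling a model reuses its weights, the encoder and decoder nested inside the autoencoder are the very same Model objects created above. A minimal check (a sketch, not from the original guide):


# Sketch: the sub-models nested in `autoencoder` are the same objects,
# so their weights are shared rather than copied. Training the
# autoencoder therefore also updates `encoder` and `decoder`.
assert autoencoder.get_layer("encoder") is encoder
assert autoencoder.get_layer("decoder") is decoder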

As you can see, models can be nested: a model can contain sub-models (since a model is just like a layer). A common use case for model nesting is ensembling. For example, here's how to ensemble a set of models into a single model that averages their predictions:


def get_model():
    inputs = keras.Input(shape=(128,))
    outputs = layers.Dense(1)(inputs)
    return keras.Model(inputs, outputs)


model1 = get_model()
model2 = get_model()
model3 = get_model()

inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)

Manipulate complex graph topologies

Models with multiple inputs and outputs

The functional API makes it easy to manipulate multiple inputs and outputs. This cannot be handled with the Sequential API.

For example, if you're building a system for ranking customer issue tickets by priority and routing them to the correct department, then the model will have three inputs:

  • the title of the ticket (text input),
  • the text body of the ticket (text input), and
  • any tags added by the user (categorical input)

This model will have two outputs:

  • the priority score between 0 and 1 (scalar sigmoid output), and
  • the department that should handle the ticket (softmax output over the set of departments)

You can build this model in a few lines with the functional API:


num_tags = 12  # Number of unique issue tags
num_words = 10000  # Size of vocabulary obtained when preprocessing text data
num_departments = 4  # Number of departments for predictions

title_input = keras.Input(
    shape=(None,), name="title"
)  # Variable-length sequence of ints
body_input = keras.Input(shape=(None,), name="body")  # Variable-length sequence of ints
tags_input = keras.Input(
    shape=(num_tags,), name="tags"
)  # Binary vectors of size `num_tags`

# Embed each word in the title into a 64-dimensional vector
title_features = layers.Embedding(num_words, 64)(title_input)
# Embed each word in the text into a 64-dimensional vector
body_features = layers.Embedding(num_words, 64)(body_input)

# Reduce sequence of embedded words in the title into a single 128-dimensional vector
title_features = layers.LSTM(128)(title_features)
# Reduce sequence of embedded words in the body into a single 32-dimensional vector
body_features = layers.LSTM(32)(body_features)

# Merge all available features into a single large vector via concatenation
x = layers.concatenate([title_features, body_features, tags_input])

# Stick a logistic regression for priority prediction on top of the features
priority_pred = layers.Dense(1, name="priority")(x)
# Stick a department classifier on top of the features
department_pred = layers.Dense(num_departments, name="department")(x)

# Instantiate an end-to-end model predicting both priority and department
model = keras.Model(
    inputs=[title_input, body_input, tags_input],
    outputs=[priority_pred, department_pred],
)

Now plot the model:


keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)


When compiling this model, you can assign different losses to each output. You can even assign different weights to each loss, to modulate their contribution to the total training loss.


model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[
        keras.losses.BinaryCrossentropy(from_logits=True),
        keras.losses.CategoricalCrossentropy(from_logits=True),
    ],
    loss_weights=[1.0, 0.2],
)

Since the output layers have different names, you could also specify the losses and loss weights with the corresponding layer names:


model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={
        "priority": keras.losses.BinaryCrossentropy(from_logits=True),
        "department": keras.losses.CategoricalCrossentropy(from_logits=True),
    },
    loss_weights={"priority": 1.0, "department": 0.2},
)

Train the model by passing lists of NumPy arrays of inputs and targets:


# Dummy input data
title_data = np.random.randint(num_words, size=(1280, 10))
body_data = np.random.randint(num_words, size=(1280, 100))
tags_data = np.random.randint(2, size=(1280, num_tags)).astype("float32")

# Dummy target data
priority_targets = np.random.random(size=(1280, 1))
dept_targets = np.random.randint(2, size=(1280, num_departments))

model.fit(
    {"title": title_data, "body": body_data, "tags": tags_data},
    {"priority": priority_targets, "department": dept_targets},
    epochs=2,
    batch_size=32,
)


Epoch 1/2
40/40 [==============================] - 8s 112ms/step - loss: 1.2982 - priority_loss: 0.6991 - department_loss: 2.9958
Epoch 2/2
40/40 [==============================] - 3s 64ms/step - loss: 1.3110 - priority_loss: 0.6977 - department_loss: 3.0666
<keras.src.callbacks.History at 0x7f08d51fab80>

When calling fit with a Dataset object, it should yield either a tuple of lists like ([title_data, body_data, tags_data], [priority_targets, dept_targets]) or a tuple of dictionaries like ({'title': title_data, 'body': body_data, 'tags': tags_data}, {'priority': priority_targets, 'department': dept_targets}).
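
For instance, here is a minimal sketch (not part of the original guide, reusing the dummy arrays defined above) of wrapping these dictionaries in a tf.data.Dataset and passing it to fit():


# Sketch: build a Dataset that yields (inputs_dict, targets_dict) tuples,
# matching the structure this multi-input/multi-output model expects.
train_ds = tf.data.Dataset.from_tensor_slices(
    (
        {"title": title_data, "body": body_data, "tags": tags_data},
        {"priority": priority_targets, "department": dept_targets},
    )
).batch(32)

model.fit(train_ds, epochs=2)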

For a more detailed explanation, refer to the training and evaluation guide.

A toy ResNet model

In addition to models with multiple inputs and outputs, the functional API makes it easy to manipulate non-linear connectivity topologies: models with layers that are not connected sequentially, which the Sequential API cannot handle.

A common use case for this is residual connections. Let's build a toy ResNet model for CIFAR10 to demonstrate this:


inputs = keras.Input(shape=(32, 32, 3), name="img")
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.Conv2D(64, 3, activation="relu")(x)
block_1_output = layers.MaxPooling2D(3)(x)

x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_1_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_2_output = layers.add([x, block_1_output])

x = layers.Conv2D(64, 3, activation="relu", padding="same")(block_2_output)
x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
block_3_output = layers.add([x, block_2_output])

x = layers.Conv2D(64, 3, activation="relu")(block_3_output)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(10)(x)

model = keras.Model(inputs, outputs, name="toy_resnet")
model.summary()


Model: "toy_resnet"
__________________________________________________________________________________________________
 Layer (type)                Output Shape                 Param #   Connected to                  
==================================================================================================
 img (InputLayer)            [(None, 32, 32, 3)]          0         []                            
                                                                                                  
 conv2d_8 (Conv2D)           (None, 30, 30, 32)           896       ['img[0][0]']                 
                                                                                                  
 conv2d_9 (Conv2D)           (None, 28, 28, 64)           18496     ['conv2d_8[0][0]']            
                                                                                                  
 max_pooling2d_2 (MaxPoolin  (None, 9, 9, 64)             0         ['conv2d_9[0][0]']            
 g2D)                                                                                             
                                                                                                  
 conv2d_10 (Conv2D)          (None, 9, 9, 64)             36928     ['max_pooling2d_2[0][0]']     
                                                                                                  
 conv2d_11 (Conv2D)          (None, 9, 9, 64)             36928     ['conv2d_10[0][0]']           
                                                                                                  
 add (Add)                   (None, 9, 9, 64)             0         ['conv2d_11[0][0]',           
                                                                     'max_pooling2d_2[0][0]']     
                                                                                                  
 conv2d_12 (Conv2D)          (None, 9, 9, 64)             36928     ['add[0][0]']                 
                                                                                                  
 conv2d_13 (Conv2D)          (None, 9, 9, 64)             36928     ['conv2d_12[0][0]']           
                                                                                                  
 add_1 (Add)                 (None, 9, 9, 64)             0         ['conv2d_13[0][0]',           
                                                                     'add[0][0]']                 
                                                                                                  
 conv2d_14 (Conv2D)          (None, 7, 7, 64)             36928     ['add_1[0][0]']               
                                                                                                  
 global_average_pooling2d (  (None, 64)                   0         ['conv2d_14[0][0]']           
 GlobalAveragePooling2D)                                                                          
                                                                                                  
 dense_6 (Dense)             (None, 256)                  16640     ['global_average_pooling2d[0][
                                                                    0]']                          
                                                                                                  
 dropout (Dropout)           (None, 256)                  0         ['dense_6[0][0]']             
                                                                                                  
 dense_7 (Dense)             (None, 10)                   2570      ['dropout[0][0]']             
                                                                                                  
==================================================================================================
Total params: 223242 (872.04 KB)
Trainable params: 223242 (872.04 KB)
Non-trainable params: 0 (0.00 Byte)
__________________________________________________________________________________________________

Plot the model:


keras.utils.plot_model(model, "mini_resnet.png", show_shapes=True)

Now train the model:


(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()

x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["acc"],
)
# We restrict the data to the first 1000 samples so as to limit execution time
# on Colab. Try to train on the entire dataset until convergence!
model.fit(x_train[:1000], y_train[:1000], batch_size=64, epochs=1, validation_split=0.2)


13/13 [==============================] - 4s 39ms/step - loss: 2.3086 - acc: 0.0988 - val_loss: 2.3020 - val_acc: 0.0850
<keras.src.callbacks.History at 0x7f078810c880>

Originally published on the TensorFlow website, this article appears here under a new headline and is licensed under CC BY 4.0. Code samples are shared under the Apache 2.0 License.
