The problem of LSTM loss and accuracy barely changing
test.py:

```python
# test.py
import random
import re

import jieba
import numpy as np
import tensorflow as tf
from gensim.models import word2vec

label_list = ['虚假评论', '真实评论']  # [fake review, genuine review]
model = tf.keras.models.load_model(r'model_lstm1.h5')

contents = [
    '好吃,非常不错,口感挺软的,不会觉得噎,不会干是属于比较新鲜的口感。',
    '口味不错,热量也不高。吃着非常好,非常愉快的一次购物体验。',
    '做工蛮精细的,木材表面磨得很平滑。',
]
# random.shuffle(contents)

# Preprocessing: strip punctuation, segment with jieba, then map each token
# to its word2vec vector, zero-padding/truncating to 200 tokens of dim 100.
word = word2vec.Word2Vec.load('word2vec.model')
r = r"[a-zA-Z0-9\s+\.\!\/_,$%^*(+\"\']+|[+——!,。?、:;;《》“”~@#¥%……&*()]+"
l = []
for content in contents:
    data = re.sub(r, '', content)
    fc = list(jieba.cut(data))
    print(fc)
    data_list = np.zeros((200, 100))
    for num, j in enumerate(fc):
        if num >= 200:  # guard: reviews longer than 200 tokens would overflow the matrix
            break
        if j in word.wv:
            data_list[num] = word.wv[j]
        # out-of-vocabulary tokens keep the zero row from np.zeros
    print(data_list)
    l.append(data_list)

# Same format as the training data: a list of (200, 100) arrays.
# np.array(l) is required to get a single (n, 200, 100) batch,
# otherwise model.predict raises an error.
content = np.array(l)

# Inspect the architecture
model.summary()

predictions = model.predict(content, verbose=0)
print(predictions)
# With the two-unit softmax output, index 1 is the probability of '真实评论':
# for i, j in enumerate(predictions):
#     print(contents[i])
#     if j[1] > 0.5:
#         print('真实评论')
#     else:
#         print('虚假评论')
#     print()
```

Now it's not working again... LSTM really does feel like black magic.

A partially successful example (two-output binary classification loss function):

```python
model = models.Sequential()
model.add(layers.LSTM(50, input_shape=(200, 100)))
model.add(layers.Dense(2, activation='softmax'))
# compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(train_x, train_y, epochs=50, batch_size=1, verbose=2)
```
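The padding logic above is easy to get wrong (an overly long review overflows the 200-row matrix). Here is a minimal, self-contained sketch of that encoding step, with a toy dictionary standing in for the real `word.wv` lookup; `encode_tokens`, `EMBED_DIM`, and the toy vectors are hypothetical names, not part of the original script:

```python
import numpy as np

EMBED_DIM = 100  # word2vec vector size used in the post
MAX_LEN = 200    # fixed sequence length expected by the LSTM

def encode_tokens(tokens, vectors, max_len=MAX_LEN, dim=EMBED_DIM):
    """Pack a token list into a (max_len, dim) float array.

    Long token lists are truncated; out-of-vocabulary tokens keep the
    zero row the matrix was initialized with, matching the training-time
    preprocessing in the post.
    """
    out = np.zeros((max_len, dim))
    for i, tok in enumerate(tokens[:max_len]):  # truncate instead of overflowing
        if tok in vectors:
            out[i] = vectors[tok]
    return out

# Toy stand-in for word.wv: two known tokens with random vectors.
rng = np.random.default_rng(0)
vecs = {'好吃': rng.normal(size=EMBED_DIM), '不错': rng.normal(size=EMBED_DIM)}

m = encode_tokens(['好吃', '未知词', '不错'], vecs)
print(m.shape)            # (200, 100)
print(bool(np.all(m[1] == 0)))  # True: the OOV token's row stays zero
```

Batching several encoded reviews with `np.array([...])` then yields the `(n, 200, 100)` input the loaded model expects.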
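One likely contributor to the flat loss/accuracy: the "partially successful" model pairs a two-unit softmax output with `binary_crossentropy`. In Keras the usual pairings are either a single sigmoid unit with `binary_crossentropy` and 0/1 labels, or two softmax units with `categorical_crossentropy` and one-hot labels. A numpy sketch (hypothetical probability values, not from the post) showing that the two correct pairings compute the same cross-entropy for a given prediction:

```python
import numpy as np

# One sample whose true class is 1 ('真实评论'); one-hot target [0, 1].
y_true_onehot = np.array([0.0, 1.0])
p_softmax = np.array([0.3, 0.7])  # two-unit softmax output

# categorical_crossentropy: -sum(t * log(p)), the loss matching a softmax head
cce = -np.sum(y_true_onehot * np.log(p_softmax))

# Single-sigmoid alternative: one scalar probability of class 1.
y_true = 1.0
p_sigmoid = 0.7
bce = -(y_true * np.log(p_sigmoid) + (1 - y_true) * np.log(1 - p_sigmoid))

print(round(cce, 4), round(bce, 4))  # both reduce to -log(0.7) here
```

So a plausible fix to try is either `model.compile(loss='categorical_crossentropy', ...)` with the existing `Dense(2, activation='softmax')` head and one-hot `train_y`, or `Dense(1, activation='sigmoid')` with the existing `binary_crossentropy` and 0/1 labels; mixing the two conventions is a common cause of training curves that barely move.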