
python - How to save and load selected variables and all variables in TensorFlow 2.0 using tf.train.Checkpoint?

Reprinted · Author: Walker 123 · Updated: 2023-11-28 21:33:00

How can I save selected variables from the TensorFlow 2.0 code below into a file, and load them into certain already-defined variables in another script, using tf.train.Checkpoint?

import tensorflow as tf

class manyVariables:
    def __init__(self):
        self.initList = [None]*100
        for i in range(100):
            self.initList[i] = tf.Variable(tf.random.normal([5,5]))
        self.makeSomeMoreVariables()

    def makeSomeMoreVariables(self):
        self.moreList = [None]*10
        for i in range(10):
            self.moreList[i] = tf.Variable(tf.random.normal([3,3]))

    def saveVariables(self):
        # How to save self.initList's 3rd, 55th and 60th elements
        # and self.moreList's 4th element?
        pass

Also, please show how to save all the variables and reload them with tf.train.Checkpoint. Thanks in advance.

Best answer

I'm not sure if this is what you mean, but you can create a tf.train.Checkpoint object specifically for the variables you want to save and restore. See the following example:

import tensorflow as tf

class manyVariables:
    def __init__(self):
        self.initList = [None]*100
        for i in range(100):
            self.initList[i] = tf.Variable(tf.random.normal([5,5]))
        self.makeSomeMoreVariables()
        self.ckpt = self.makeCheckpoint()

    def makeSomeMoreVariables(self):
        self.moreList = [None]*10
        for i in range(10):
            self.moreList[i] = tf.Variable(tf.random.normal([3,3]))

    def makeCheckpoint(self):
        return tf.train.Checkpoint(
            init3=self.initList[3], init55=self.initList[55],
            init60=self.initList[60], more4=self.moreList[4])

    def saveVariables(self):
        self.ckpt.save('./ckpt')

    def restoreVariables(self):
        status = self.ckpt.restore(tf.train.latest_checkpoint('.'))
        status.assert_consumed()  # Optional check

# Create variables
v1 = manyVariables()
# Assign fixed values
for i, v in enumerate(v1.initList):
    v.assign(i * tf.ones_like(v))
for i, v in enumerate(v1.moreList):
    v.assign(100 + i * tf.ones_like(v))
# Save them
v1.saveVariables()

# Create new variables
v2 = manyVariables()
# Check initial values
print(v2.initList[2].numpy())
# [[-1.9110833 0.05956204 -1.1753829 -0.3572553 -0.95049495]
# [ 0.31409055 1.1262076 0.47890127 -0.1699607 0.4409122 ]
# [-0.75385517 -0.13847834 0.97012395 0.42515194 -1.4371008 ]
# [ 0.44205236 0.86158335 0.6919655 -2.5156968 0.16496429]
# [-1.241602 -0.15177743 0.5603795 -0.3560254 -0.18536267]]
print(v2.initList[3].numpy())
# [[-3.3441594 -0.18425298 -0.4898144 -1.2330629 0.08798431]
# [ 1.5002227 0.99475247 0.7817361 0.3849587 -0.59548247]
# [-0.57121766 -1.277224 0.6957546 -0.67618763 0.0510064 ]
# [ 0.85491985 0.13310803 -0.93152267 0.10205163 0.57520276]
# [-1.0606447 -0.16966362 -1.0448577 0.56799036 -0.90726566]]

# Restore them
v2.restoreVariables()
# Check values after restoring
print(v2.initList[2].numpy())
# [[-1.9110833 0.05956204 -1.1753829 -0.3572553 -0.95049495]
# [ 0.31409055 1.1262076 0.47890127 -0.1699607 0.4409122 ]
# [-0.75385517 -0.13847834 0.97012395 0.42515194 -1.4371008 ]
# [ 0.44205236 0.86158335 0.6919655 -2.5156968 0.16496429]
# [-1.241602 -0.15177743 0.5603795 -0.3560254 -0.18536267]]
print(v2.initList[3].numpy())
# [[3. 3. 3. 3. 3.]
# [3. 3. 3. 3. 3.]
# [3. 3. 3. 3. 3.]
# [3. 3. 3. 3. 3.]
# [3. 3. 3. 3. 3.]]
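
If you want to double-check what actually ended up in the checkpoint file, a quick way (not part of the original answer) is tf.train.list_variables, which lists the saved keys and their shapes; this sketch assumes the './ckpt' prefix used above:

# Inspect the latest checkpoint written by ckpt.save('./ckpt') above.
latest = tf.train.latest_checkpoint('.')   # e.g. './ckpt-1'
for name, shape in tf.train.list_variables(latest):
    print(name, shape)
# Expected to include keys such as 'init3/.ATTRIBUTES/VARIABLE_VALUE' with shape [5, 5]
# and 'more4/.ATTRIBUTES/VARIABLE_VALUE' with shape [3, 3], plus checkpoint bookkeeping entries.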

If you want to save all the variables in the lists, you could replace makeCheckpoint with something like this:

def makeCheckpoint(self):
    return tf.train.Checkpoint(
        **{f'init{i}': v for i, v in enumerate(self.initList)},
        **{f'more{i}': v for i, v in enumerate(self.moreList)})
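
As a minimal sketch of how this variant could be used (my addition, assuming the class above with makeCheckpoint replaced as shown; the names allVars1/allVars2 are just for illustration), save and restore now cover every variable in both lists:

allVars1 = manyVariables()
for i, v in enumerate(allVars1.initList):
    v.assign(i * tf.ones_like(v))
allVars1.saveVariables()

allVars2 = manyVariables()           # fresh random variables
allVars2.restoreVariables()          # assert_consumed() passes: every saved value is matched
print(allVars2.initList[7].numpy())  # expected: a 5x5 matrix filled with 7.0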

Note that you can have "nested" checkpoints, so, more generally, you could have a function that creates a checkpoint for a list of variables, for example:

def listCheckpoint(varList):
    # Use 'item{}'.format(i) if using Python <3.6
    return tf.train.Checkpoint(**{f'item{i}': v for i, v in enumerate(varList)})

And then you could have:

def makeCheckpoint(self):
    return tf.train.Checkpoint(init=listCheckpoint(self.initList),
                               more=listCheckpoint(self.moreList))

Regarding "python - How to save and load selected variables and all variables in TensorFlow 2.0 using tf.train.Checkpoint?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/55262614/
