Torch save pickle protocol

In PyTorch you save a model with torch.save() and load it with torch.load(). As the docs put it, torch.load uses Python's unpickling facilities but treats storages, which underlie tensors, specially. That is the essential difference between torch.save() and pickle.dump(): torch.save builds on pickle, but it pulls the (potentially large) tensor storages out of the pickle stream and writes them separately.

The signature makes the pickle dependency explicit (from torch/serialization.py, with the original Chinese annotations translated):

```python
def save(
    obj,
    f: Union[str, os.PathLike, BinaryIO],  # file (or file-like object) to write to
    pickle_module=pickle,                  # defaults to the stdlib pickle for serialization
    pickle_protocol=DEFAULT_PROTOCOL,      # defaults to pickle protocol 2
    _use_new_zipfile_serialization=True,   # zipfile-based format, the default since PyTorch 1.6
) -> None:
    ...
```

That being said, I prefer to push the model to CPU first before saving the state_dict. When I pickle CUDA weights directly, loading them on a machine with a different GPU layout fails with an error along the lines of "Attempting to deserialize object on CUDA device ...". Passing map_location to torch.load (or saving CPU tensors in the first place) avoids this.

On compression: I have tried using the compress-pickle package, but it actually increases the file size. Trained float32 weights are close to incompressible, so a general-purpose compressor mostly adds framing overhead. (If it's slow, then it is our fault, not Pickle's.) Note that torch.save accepts arbitrary file-like objects, not just paths; on May 10, 2017, apaszke retitled the GitHub issue "Enhancement: support model compression on save" to "Support arbitrary file-like objects in torch.save", which is what makes wrapping the output in a compressing stream possible at all.
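A minimal sketch of both habits (CPU-first saving, defensive loading). The nn.Linear model and the "model.pt" path are illustrative stand-ins, not anything prescribed by PyTorch:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
if torch.cuda.is_available():
    model = model.cuda()

# Save a device-agnostic checkpoint: copy every tensor to CPU first,
# so no CUDA device index is baked into the serialized storages.
cpu_state = {k: v.cpu() for k, v in model.state_dict().items()}
torch.save(cpu_state, "model.pt")

# Load defensively: map_location remaps storages at unpickling time,
# so this works on a box with fewer GPUs, different GPUs, or none.
state = torch.load("model.pt", map_location="cpu")
model.load_state_dict(state)
```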
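To check the compression claim yourself, here is a small experiment (my own sketch, not an official benchmark): it saves a state_dict into an in-memory buffer and gzips the bytes. With effectively random float weights the savings are marginal, and framing overhead can even make the result larger, matching what I saw with compress-pickle:

```python
import gzip
import io

import torch
import torch.nn as nn

state = nn.Linear(512, 512).state_dict()  # roughly 1 MB of float32 weights

buf = io.BytesIO()
torch.save(state, buf)  # torch.save writes to any file-like object
raw = buf.getvalue()

compressed = gzip.compress(raw)
print(f"raw: {len(raw)} bytes, gzipped: {len(compressed)} bytes")
# Random-looking float32 payloads barely shrink under deflate; the
# gzip header and framing can even push the total above the input.
```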