Submitted by Olivier Delalleau
The previous mechanism could take over a minute with a big cache. It performed two checks:

1. (expensive) Compare all loaded keys pair-wise and find those that are equal even though they have a different hash.
2. (cheap) Ensure that unpickling a pickled key yields a key equal to the original one.

These checks are now replaced by a single check performed each time a new key is saved in the cache: we reload the pickled KeyData object and ensure that it contains exactly one unpickled key equal to the key originally saved. This obviously catches the errors that check #2 would have caught. It also catches the errors that check #1 would have caught, because if two keys are equal, they must yield the same module hash and thus be stored in the same KeyData object: if they have different key hashes, both will appear in the 'keys' set of that KeyData object, so the new check will complain that more than one key is equal to the one just saved.

NB: Also removed a couple of TODOs, now that I better understand why some of that code was written the way it was.

3f0d61d4
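The per-save check described above can be sketched as follows. This is a minimal illustration, not the actual cache code: the simplified `KeyData` class and the `check_key` helper are hypothetical stand-ins for the real objects, which carry more state (module hash, filenames, etc.).

```python
import pickle


class KeyData(object):
    """Simplified stand-in for the cache's KeyData object: the set of
    keys that all map to the same compiled module (same module hash)."""

    def __init__(self, keys):
        self.keys = set(keys)

    def save(self, path):
        # Persist this KeyData via pickle, as the cache does on disk.
        with open(path, "wb") as f:
            pickle.dump(self, f)


def check_key(key, key_data_path):
    """Hypothetical helper mirroring the new check: reload the pickled
    KeyData and verify it contains exactly one unpickled key equal to
    the key just saved.

    A count of 0 means pickling round-trip is broken (old check #2);
    a count > 1 means two distinct keys compare equal despite having
    different hashes (old check #1).
    """
    with open(key_data_path, "rb") as f:
        key_data = pickle.load(f)
    n_equal = sum(1 for k in key_data.keys if k == key)
    if n_equal != 1:
        raise AssertionError(
            "Found %d key(s) equal to the one just saved; expected "
            "exactly 1." % n_equal)
```

Because equal keys share a module hash, they land in the same KeyData, so a single count over its 'keys' set is enough to detect both failure modes at save time instead of scanning the whole cache pair-wise.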