Description
Zstd in general:
Zstd:chunked:
copy.Options.EnsureCompressionVariantsExist would never stop adding zstd:chunked variants: copy.Options.EnsureCompressionVariantsExist doesn’t detect existing variants with zstd:chunked #201
See if the BlobCache needs updating for Zstd / Zstd:chunked: https://issues.redhat.com/browse/RUN-1124
The chunked manifest size has a hard-coded limit, and pulling large layers outright fails: pulling image with zstd:chunked compression is returning manifest too big error podman#24885
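The failure mode can be sketched as follows; the constant name, value, and error text are illustrative stand-ins (not the real c/storage identifiers), showing how a hard-coded cap turns a large-but-valid chunked manifest (TOC) into an outright pull failure rather than a fallback:

```go
package main

import (
	"errors"
	"fmt"
)

// maxTocSize is a hypothetical hard-coded cap standing in for the real
// limit in c/storage's chunked code; layers whose TOC exceeds it fail
// to pull instead of falling back to an ordinary full pull.
const maxTocSize = 50 * 1024 * 1024

var errTocTooBig = errors.New("manifest too big")

// checkTocSize rejects TOCs larger than the cap.
func checkTocSize(size int64) error {
	if size > maxTocSize {
		return fmt.Errorf("%w: %d > %d", errTocTooBig, size, maxTocSize)
	}
	return nil
}

func main() {
	fmt.Println(checkTocSize(1024))           // <nil>
	fmt.Println(checkTocSize(maxTocSize + 1)) // manifest too big: …
}
```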
Exercise Podman tests with Zstd:chunked default: DO NOT MERGE: Run tests with Zstd:chunked compression default podman#21903
“view ambiguity”: https://issues.redhat.com/browse/RHEL-66492 , Compute an uncompressed digest for chunked layers storage#2155 , Expect UncompressedDigest to be set for partial pulls, enforce DiffID match image#2613 , Zstd:chunked podman-side tests podman#25007
Pulls with missing parent directories fail: https://issues.redhat.com/browse/RUN-2364 . Fixed in chunked: handle creating root directory storage#2194
Reusing already-present data can be optimized: Extend PutLayer to optimize reusing data from existing layers storage#1830 , Update a comment after a c/storage update image#2299 , Propagate CompressedDigest/CompressedSize when reusing data from another layer image#2583
Podman’s zstd:chunked-specific tests are not representative: Fix apparent typos in zstd:chunked tests podman#24686
Pulls on VFS fail outright: podman pull zstd:chunked-image, with vfs: not supported podman#24308 , Fall back from partial pull when on VFS storage#2140
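A minimal sketch of the fallback shape storage#2140 introduces; the function names and driver check are hypothetical, not the real c/storage API:

```go
package main

import "fmt"

// supportsPartialPulls is a hypothetical stand-in for asking the graph
// driver whether it implements the partial-pull machinery; vfs does not.
func supportsPartialPulls(driver string) bool {
	switch driver {
	case "overlay":
		return true
	default: // e.g. "vfs"
		return false
	}
}

// pullLayer sketches the fix: on a driver without partial-pull support,
// fall back from a partial (chunked) pull to an ordinary full pull
// instead of failing outright.
func pullLayer(driver string, chunkedAvailable bool) string {
	if chunkedAvailable && supportsPartialPulls(driver) {
		return "partial pull"
	}
	return "full pull"
}

func main() {
	fmt.Println(pullLayer("overlay", true)) // partial pull
	fmt.Println(pullLayer("vfs", true))     // full pull, not an error
}
```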
expectedLayerDiffIDFlag seems to use mismatching types: expectedLayerDiffIDFlag expectation mismatch image#2602
Document zstd:chunked and encryption interaction: Document that zstd:chunked doesn’t make sense with encryption common#2117 , Document that zstd:chunked is downgraded to zstd when encrypting common#2176 , Document that zstd:chunked is downgraded to zstd when encrypting skopeo#2427 , Document that zstd:chunked is downgraded to zstd when encrypting buildah#5759 , Document that zstd:chunked is downgraded to zstd when encrypting podman#24113
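The documented behavior amounts to a one-line downgrade rule; this sketch uses illustrative names, not the real c/image API. The TOC and per-chunk offsets would describe plaintext structure and cannot drive partial pulls of ciphertext, so zstd:chunked is silently downgraded to plain zstd when encrypting:

```go
package main

import "fmt"

// chooseCompression sketches the downgrade documented in common#2176
// and friends: zstd:chunked plus encryption degenerates to plain zstd.
// Names here are illustrative, not the real c/image API.
func chooseCompression(requested string, encrypting bool) string {
	if requested == "zstd:chunked" && encrypting {
		return "zstd" // downgraded, as the documentation PRs describe
	}
	return requested
}

func main() {
	fmt.Println(chooseCompression("zstd:chunked", true))  // zstd
	fmt.Println(chooseCompression("zstd:chunked", false)) // zstd:chunked
}
```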
Propose iterateTarSplit to upstream tar-split: Ensure chunked TOC and tar-split metadata are consistent storage#2035 (comment) . Filed Add tar/asm.IterateHeaders vbatts/tar-split#71 , Use tar-split/tar/asm.IterateHeaders now that it has been accepted storage#2116 .
blobPipelineDetectCompressionStep detects zstd:chunked as zstd, causing unnecessary recompression: around copy: do not fail if digest mismatches image#1980 (comment) , Record zstd:chunked format, and annotations, in BlobInfoCache image#2487 .
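The root cause is that a zstd:chunked blob is a fully valid zstd stream (its TOC lives in skippable frames and manifest annotations), so content sniffing of the blob prefix can only ever report plain zstd. A minimal sketch of such magic-byte detection (illustrative, not the real c/image code):

```go
package main

import (
	"bytes"
	"fmt"
)

// zstdMagic is the standard zstd frame magic number, 0xFD2FB528,
// stored little-endian at the start of every zstd frame.
var zstdMagic = []byte{0x28, 0xb5, 0x2f, 0xfd}

// detectCompression sniffs a blob prefix the way a detection step can:
// zstd:chunked is indistinguishable from zstd at this level, which is
// why image#2487 records the format in the BlobInfoCache instead.
func detectCompression(prefix []byte) string {
	if bytes.HasPrefix(prefix, zstdMagic) {
		return "zstd" // could equally be zstd:chunked
	}
	return "unknown"
}

func main() {
	chunkedBlobPrefix := []byte{0x28, 0xb5, 0x2f, 0xfd, 0x00}
	fmt.Println(detectCompression(chunkedBlobPrefix)) // zstd, not zstd:chunked
}
```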
Pushes reusing chunked layers don’t create the required annotations, making the layers impossible to pull chunked: [WIP] compression: add specific prefix for zstd:chunked image#2183 (review) / WIP HACK: Do not reuse zstd:chunked blobs image#2185 , Record zstd:chunked format, and annotations, in BlobInfoCache image#2487
zstd:chunked and layer encryption don’t make sense together: zstd:chunked and layer encryption don’t make sense together image#2485
Blocker? TOC data and tar-split may be ambiguous: zstd:chunked metadata ambiguity storage#2014 , Ensure chunked TOC and tar-split metadata are consistent storage#2035
c/storage AdditionalLayerStore seems to be used for TOC-identified layers, incorrectly: Support additional layer store (patch for containers/storage) storage#795 (comment) , [Additional Layer Store] Use TOCDigest as ID of each layer (patch for c/storage) storage#1924
TarSplitChecksumKey not used in a layer ID: zstd:chunked blocker: TarSplitChecksumKey not used in a layer ID storage#1888 , Move the tar-split digest into the TOC storage#1902
blobPipelineCompressionStep would trigger a recompression of zstd:chunked if the user asks for zstd (except that we don’t currently detect zstd:chunked): around copy: do not fail if digest mismatches image#1980 (comment) , Improve handling of zstd vs. zstd:chunked matching image#2317
The c/storage “binary footer” code path does not work: zstd:chunked binary footer format is broken storage#1886
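For context, the zstd:chunked footer rides in a zstd "skippable frame", which ordinary decompressors ignore. A sketch of parsing that container per the zstd spec (magic 0x184D2A50–0x184D2A5F little-endian, then a 4-byte little-endian payload size); the payload layout itself is the part storage#1886 reports as broken, so it is not modeled here:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// parseSkippableFrame parses a zstd skippable-frame header and returns
// its payload. Per the zstd spec, the magic is any of
// 0x184D2A50..0x184D2A5F (little-endian), followed by a 4-byte
// little-endian payload size.
func parseSkippableFrame(b []byte) ([]byte, error) {
	if len(b) < 8 {
		return nil, fmt.Errorf("truncated frame header")
	}
	magic := binary.LittleEndian.Uint32(b[0:4])
	if magic&^0xF != 0x184D2A50 { // mask the low nibble of the magic range
		return nil, fmt.Errorf("not a skippable frame: %#x", magic)
	}
	size := binary.LittleEndian.Uint32(b[4:8])
	if uint32(len(b)-8) < size {
		return nil, fmt.Errorf("truncated payload: want %d, have %d", size, len(b)-8)
	}
	return b[8 : 8+size], nil
}

func main() {
	frame := []byte{0x50, 0x2a, 0x4d, 0x18, 3, 0, 0, 0, 'T', 'O', 'C'}
	payload, err := parseSkippableFrame(frame)
	fmt.Println(string(payload), err) // TOC <nil>
}
```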
Outstanding items from chunked: generate tar-split as part of zstd:chunked storage#1627 (review) : Chunked cleanups storage#1844
Copies of chunked layers don’t mark the image as requiring Zstd support: Copies don’t set OCI1InstanceAnnotationCompressionZSTD on Zstd:chunked image#2077 , fix minor review comment about the driver mutex and a minor lint issue storage#2302
Chunked layers are visible in c/storage, and usable by other processes, while they are still being populated: Creation of Zstd:chunked layers seems racy image#1979 , store: new API ApplyStagedLayer storage#1826 + idmap: fix first argument to open_tree storage#2301 .
PutBlobPartial with a non-chunked input and conversion in c/storage enabled doesn’t work: Fix c/storage destination with partial pulls image#2288 (comment) , fixed there
Pushes of chunked layers fail because the UncompressedDigest field stores a TOC digest, and layer digest validation fails: Fixed in copy: do not fail if digest mismatches image#1980
Pulls of chunked layers do not reuse locally-existing layers: Part of copy: do not fail if digest mismatches image#1980
In c/storage, we don’t sufficiently differentiate layers pulled by blob digest vs. TOC digest, and reuse is unclear: Part of copy: do not fail if digest mismatches image#1980 , and chunked: generate tar-split as part of zstd:chunked storage#1627 (comment) . Fixed in RFC: Use Go 1.23 iterators for locking+traversing stores. storage#2288 .
Pulls may reuse layers pulled by blob digests vs. TOC digests: Part of copy: do not fail if digest mismatches image#1980 . Fixed in RFC: Use Go 1.23 iterators for locking+traversing stores. storage#2288 .
Image deduplication contains a (required!) sanity check that TopLayer matches, but that might not be true if layers pulled by blob vs. TOC digests have different IDs: Part of copy: do not fail if digest mismatches image#1980 . Fixed in RFC: Use Go 1.23 iterators for locking+traversing stores. storage#2288 .
Reuse of local layers by TOC may trigger inconsistent metadata updates: [crio-1.31] chore(deps): update dependency golangci/golangci-lint to v1.64.8 storage#2294
A pull by TOC + conversion to non-OCI might parse data incorrectly: chore(deps): update dependency golangci/golangci-lint to v2 storage#2295
(… docker-archive: destinations): Copies of originally-compressed images from c/storage to uncompressed destinations don’t trigger MIME type updates image#2182