[CI] Update image tag to 20240126-070121-8ade9c30e #16435
cc @Hzfengsy
Force-pushed e7edc07 to 243e9dd.
The CI failed during …
Force-pushed 243e9dd to 482229c.
Updated to the newer image (tag: 20240126-070121-8ade9c30e), which was built on Jan 26. CI is green now.

Actually, the current CI doesn't really use the S3 bucket tvm-sccache-prod for sccache; the local sccache cache is used instead. I have a testing PR that hardcodes an invalid bucket, and CI still passed. sccache 0.3.3 does not verify the AWS credentials during `--start-server`, whereas the newer sccache 0.7.x does. In this PR, I changed it to fall back to the local cache if no AWS credentials are available.

I conducted a benchmark comparing sccache with and without an S3 bucket on an AWS g5.4xlarge instance. The results showed that the build without the S3 bucket (local cache, 219.48 seconds average build time) is faster than the build with the S3 bucket (232.01 seconds average build time). Given these findings, I recommend not enabling the S3 bucket for sccache at this time.

Regarding AWS credentials, they are currently not passed to the worker container. One option is to pass AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY through to the container as shown here. However, this approach raises security concerns. We should explore alternatives such as IAM roles, task IAM roles, or similar solutions for future needs, especially if we decide to enable S3 for sccache. Useful references include discussions on best practices for passing AWS credentials to Docker containers (Stack Overflow), IAM roles for Amazon EC2 (AWS EC2 User Guide), and task IAM roles (AWS ECS Developer Guide).
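For illustration, here is a minimal sketch of the "fall back to the local cache when no credentials are available" behavior described above. It is not the actual TVM CI script; the `SCCACHE_BUCKET` environment variable and `--start-server` flag are real sccache interfaces, but the helper function and its wiring are assumed.

```python
import os
import subprocess


def start_sccache(bucket: str = "tvm-sccache-prod") -> None:
    """Start the sccache server, falling back to the local on-disk cache
    when no AWS credentials are present in the environment."""
    env = os.environ.copy()
    has_creds = bool(env.get("AWS_ACCESS_KEY_ID")) and bool(env.get("AWS_SECRET_ACCESS_KEY"))
    if has_creds:
        # Credentials available: point sccache at the shared S3 bucket.
        env["SCCACHE_BUCKET"] = bucket
    else:
        # No credentials: drop any S3 configuration so sccache quietly
        # uses its local cache (the behavior benchmarked above).
        env.pop("SCCACHE_BUCKET", None)
    subprocess.run(["sccache", "--start-server"], env=env, check=True)
```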
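Likewise, a hedged sketch of the environment-variable pass-through mentioned above (`docker run -e VAR` without a value copies the variable from the host environment into the container). The wrapper function and image argument are hypothetical, and this is exactly the approach flagged above as a security concern compared with IAM or task IAM roles.

```python
import os
import subprocess


def run_ci_container(image: str) -> None:
    """Launch a CI worker container, forwarding AWS credentials from the host.

    Passing secrets as environment variables is the security concern noted
    above; IAM roles or task IAM roles would avoid exposing them here.
    """
    cmd = ["docker", "run", "--rm"]
    for var in ("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"):
        if var in os.environ:
            # "-e VAR" without "=value" tells Docker to copy the variable
            # from the host environment into the container.
            cmd += ["-e", var]
    cmd.append(image)
    subprocess.run(cmd, check=True)
```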
Update the images to align with the latest upgrades of emsdk and Node.js.