Conversation

@weiyilwy (Contributor)

Add a benchmark script for FLUX and Qwen-Image under the examples folder.

@gemini-code-assist (bot) left a comment


Summary of Changes

Hello @weiyilwy, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new comprehensive benchmarking script designed to evaluate the performance of image generation models, specifically FLUX and Qwen-Image. The script allows for testing various optimization configurations, such as FP8 linear, Torch compile, and CPU offloading, and provides detailed profiling capabilities through CUDA timeline traces. This addition aims to facilitate performance analysis and optimization efforts for these models.

Highlights

  • New Benchmark Script: Implemented a new multi-model performance benchmark script (model_perf_benchmark.py) under the examples directory.
  • Model and Optimization Support: Enabled benchmarking for FLUX and Qwen-Image models with various optimization modes including basic, FP8 linear, Torch compile, FP8 + compile, and CPU offloading.
  • Performance Profiling: Added functionality to generate detailed CUDA timeline traces using torch.profiler for in-depth performance analysis (see the trace-export sketch after this list).
  • Documentation: Provided comprehensive documentation for the benchmark script in model_benchmark_readme.md, covering features, usage examples, and output details.
  • Gitignore Update: Updated .gitignore to exclude local Claude settings.
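
For readers who want to see what the trace generation boils down to, here is a minimal, hedged sketch of exporting a CUDA timeline trace with torch.profiler. It is illustrative only: dummy_workload is a stand-in for the actual FLUX / Qwen-Image pipeline call in model_perf_benchmark.py, and a CUDA device is assumed to be available.

    import torch
    from torch.profiler import profile, ProfilerActivity

    # Placeholder workload; the PR's script profiles the real pipeline call instead.
    def dummy_workload():
        x = torch.randn(1024, 1024, device="cuda")
        for _ in range(10):
            x = x @ x
        torch.cuda.synchronize()

    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        record_shapes=True,
    ) as prof:
        dummy_workload()

    # The exported Chrome-trace JSON can be opened in chrome://tracing or Perfetto.
    prof.export_chrome_trace("trace.json")
    print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))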

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces a comprehensive benchmark script for the flux and qwen_image models, along with a corresponding README file. The script is well-structured, using an object-oriented approach to support different models and optimization modes. My review focuses on improving the script's robustness and maintainability, and on providing more comprehensive profiling data. Key suggestions include removing reliance on internal PyTorch APIs, enabling memory profiling for better analysis, and refactoring repetitive code blocks. I've also pointed out a minor compatibility issue with type hints.

"with_stack": True,
"with_flops": True,
"with_modules": True,
"experimental_config": torch._C._profiler._ExperimentalConfig(verbose=True)

Severity: high

The use of torch._C._profiler._ExperimentalConfig relies on an internal, undocumented PyTorch API. This is risky as it can break without warning in future PyTorch versions. It's recommended to use public APIs for stability and maintainability. If the goal of verbose=True is to get more detailed traces, consider if the information provided by the public profiler arguments is sufficient. For now, it's safer to remove it to avoid potential breakages.
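
If the experimental config is dropped, a configuration built from only documented torch.profiler arguments could look like the following sketch (argument values mirror the script's current settings, minus the internal experimental_config; profile_memory is discussed separately below):

    from torch.profiler import profile, ProfilerActivity

    # Only public, documented profiler arguments; no torch._C._profiler usage.
    profiler_args = {
        "activities": [ProfilerActivity.CPU, ProfilerActivity.CUDA],
        "record_shapes": True,
        "profile_memory": False,
        "with_stack": True,
        "with_flops": True,
        "with_modules": True,
    }

    with profile(**profiler_args) as prof:
        ...  # the benchmarked pipeline call goes here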

profiler_args = {
    "activities": [ProfilerActivity.CPU, ProfilerActivity.CUDA],
    "record_shapes": True,
    "profile_memory": False,

Severity: medium

The profiler is configured with profile_memory=False. For a comprehensive performance benchmark, memory usage is a crucial metric, especially when comparing different optimization techniques like CPU offloading. Enabling memory profiling would provide valuable insights into the memory footprint of the models and pipelines.

Suggested change
"profile_memory": False,
"profile_memory": True,

Comment on lines +102 to +141
    def create_pipeline(self, mode: str):
        """Create FLUX pipeline with specified optimization mode."""
        config = None

        if mode == "fp8":
            config = self.config_class(
                model_path=self.model_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                use_fp8_linear=True,
            )
        elif mode == "compile":
            config = self.config_class(
                model_path=self.model_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                use_torch_compile=True,
            )
        elif mode == "fp8_compile":
            config = self.config_class(
                model_path=self.model_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                use_fp8_linear=True,
                use_torch_compile=True,
            )
        elif mode == "offload":
            config = self.config_class(
                model_path=self.model_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                offload_mode="sequential_cpu_offload",
            )
        else:  # basic mode
            config = self.config_class.basic_config(
                model_path=self.model_path,
                device="cuda",
            )

        return self.pipeline_class.from_pretrained(config)

Severity: medium

The create_pipeline method contains a long if/elif/else chain to handle different modes. This leads to code repetition, as many configuration parameters are shared across modes. This can be refactored to be more concise and maintainable by using a dictionary to store mode-specific configurations and merging them with a base configuration. This would make it easier to add or modify modes in the future.

    def create_pipeline(self, mode: str):
        """Create FLUX pipeline with specified optimization mode."""
        common_config = {
            "model_path": self.model_path,
            "device": "cuda",
            "model_dtype": torch.bfloat16,
        }
        mode_configs = {
            "fp8": {"use_fp8_linear": True},
            "compile": {"use_torch_compile": True},
            "fp8_compile": {"use_fp8_linear": True, "use_torch_compile": True},
            "offload": {"offload_mode": "sequential_cpu_offload"},
        }

        if mode in mode_configs:
            config_args = {**common_config, **mode_configs[mode]}
            config = self.config_class(**config_args)
        else:  # basic mode
            config = self.config_class.basic_config(
                model_path=self.model_path,
                device="cuda",
            )

        return self.pipeline_class.from_pretrained(config)

Comment on lines +159 to +217
    def create_pipeline(self, mode: str):
        """Create Qwen-Image pipeline with specified optimization mode."""
        config = None

        if mode == "fp8":
            config = self.config_class(
                model_path=self.model_path,
                encoder_path=self.encoder_path,
                vae_path=self.vae_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                encoder_dtype=torch.bfloat16,
                vae_dtype=torch.float32,
                use_fp8_linear=True,
            )
        elif mode == "compile":
            config = self.config_class(
                model_path=self.model_path,
                encoder_path=self.encoder_path,
                vae_path=self.vae_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                encoder_dtype=torch.bfloat16,
                vae_dtype=torch.float32,
                use_torch_compile=True,
            )
        elif mode == "fp8_compile":
            config = self.config_class(
                model_path=self.model_path,
                encoder_path=self.encoder_path,
                vae_path=self.vae_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                encoder_dtype=torch.bfloat16,
                vae_dtype=torch.float32,
                use_fp8_linear=True,
                use_torch_compile=True,
            )
        elif mode == "offload":
            config = self.config_class(
                model_path=self.model_path,
                encoder_path=self.encoder_path,
                vae_path=self.vae_path,
                device="cuda",
                model_dtype=torch.bfloat16,
                encoder_dtype=torch.bfloat16,
                vae_dtype=torch.float32,
                offload_mode="sequential_cpu_offload",
            )
        else:  # basic mode
            config = self.config_class.basic_config(
                model_path=self.model_path,
                encoder_path=self.encoder_path,
                vae_path=self.vae_path,
                device="cuda",
            )

        return self.pipeline_class.from_pretrained(config)


Severity: medium

Similar to FluxBenchmark, the create_pipeline method in this class has a repetitive if/elif/else structure. Refactoring this to use a dictionary for mode-specific settings would improve code clarity and maintainability.

    def create_pipeline(self, mode: str):
        """Create Qwen-Image pipeline with specified optimization mode."""
        common_config = {
            "model_path": self.model_path,
            "encoder_path": self.encoder_path,
            "vae_path": self.vae_path,
            "device": "cuda",
            "model_dtype": torch.bfloat16,
            "encoder_dtype": torch.bfloat16,
            "vae_dtype": torch.float32,
        }
        mode_configs = {
            "fp8": {"use_fp8_linear": True},
            "compile": {"use_torch_compile": True},
            "fp8_compile": {"use_fp8_linear": True, "use_torch_compile": True},
            "offload": {"offload_mode": "sequential_cpu_offload"},
        }

        if mode in mode_configs:
            config_args = {**common_config, **mode_configs[mode]}
            config = self.config_class(**config_args)
        else:  # basic mode
            config = self.config_class.basic_config(
                model_path=self.model_path,
                encoder_path=self.encoder_path,
                vae_path=self.vae_path,
                device="cuda",
            )

        return self.pipeline_class.from_pretrained(config)

        if args.model == "flux":
            print("Fetching default FLUX model...")
            from diffsynth_engine import fetch_model
            model_path: str | List[str] = fetch_model("muse/flux-with-vae", path="flux1-dev-with-vae.safetensors")

Severity: medium

The type hint str | List[str] uses the new union operator |, which was introduced in Python 3.10. To ensure compatibility with older Python 3 versions, it's better to use Union[str, List[str]] from the typing module. You'll need to import Union from typing at the top of the file.

            model_path: Union[str, List[str]] = fetch_model("muse/flux-with-vae", path="flux1-dev-with-vae.safetensors")
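
A minimal sketch of what the change would look like together with the accompanying import (fetch_model is imported a few lines above in the script):

    from typing import List, Union

    model_path: Union[str, List[str]] = fetch_model(
        "muse/flux-with-vae", path="flux1-dev-with-vae.safetensors"
    )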

@weiyilwy merged commit 6047ee1 into main on August 26, 2025.
@weiyilwy deleted the feature/add_benchmark_script_20250826 branch on August 26, 2025 at 08:54.