Align metrics.get() metric name with identifier alias (tf.keras parity) #22121
MalyalaKarthik66 wants to merge 2 commits into keras-team:master
Conversation
Summary of Changes (Gemini Code Assist): This pull request aims to enhance the consistency and predictability of metric naming within Keras by using the string identifier passed to metrics.get() as the metric's name.
Activity
Code Review
This pull request aligns metrics.get() behavior with tf.keras by using the string identifier as the metric's name, which is a good improvement for consistency. The implementation uses a try-except block to handle metric instantiation, for which I've left a suggestion to make it more robust against masking unrelated TypeError exceptions.
keras/src/metrics/__init__.py
Outdated
```python
try:
    obj = obj(name=identifier)
except TypeError:
    obj = obj()
```
The use of a broad except TypeError can mask legitimate TypeError exceptions raised from within the metric's __init__ method. For instance, if a metric's constructor that accepts a name argument raises a TypeError for an internal reason, this exception will be silently caught, and the code will fall back to instantiating the metric without arguments. This could lead to unexpected behavior and hide bugs.
A more robust approach is to inspect the constructor's signature to determine if it can be called with just the name argument, avoiding the try-except block. This ensures that only argument-related instantiation issues are handled gracefully, while other TypeErrors are allowed to propagate.
```diff
-try:
-    obj = obj(name=identifier)
-except TypeError:
-    obj = obj()
+sig = inspect.signature(obj.__init__)
+params = sig.parameters
+has_other_req_args = any(
+    p.name not in ("self", "name")
+    and p.default is inspect.Parameter.empty
+    and p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
+    for p in params.values()
+)
+if "name" in params and not has_other_req_args:
+    obj = obj(name=identifier)
+else:
+    obj = obj()
```
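To see what the suggested signature check buys, here is a standalone sketch of the same inspection logic. The two metric classes below are hypothetical stand-ins for illustration, not real Keras classes:

```python
import inspect

class NamedMetric:
    """Stand-in for a metric whose constructor accepts `name`."""
    def __init__(self, name=None):
        self.name = name or "named_metric"

class LegacyMetric:
    """Stand-in for a metric whose constructor takes no `name`."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold

def instantiate(cls, identifier):
    # Same check as the suggestion above: pass name=identifier only when
    # the constructor accepts `name` and has no other required
    # positional arguments.
    params = inspect.signature(cls.__init__).parameters
    has_other_req_args = any(
        p.name not in ("self", "name")
        and p.default is inspect.Parameter.empty
        and p.kind == inspect.Parameter.POSITIONAL_OR_KEYWORD
        for p in params.values()
    )
    if "name" in params and not has_other_req_args:
        return cls(name=identifier)
    return cls()

print(instantiate(NamedMetric, "mse").name)   # mse
print(type(instantiate(LegacyMetric, "acc")).__name__)  # LegacyMetric
```

Unlike the try-except version, a TypeError raised inside either constructor would propagate instead of being silently swallowed.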
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master   #22121      +/-   ##
==========================================
- Coverage   82.80%   82.79%   -0.01%
==========================================
  Files         592      592
  Lines       63146    63153       +7
  Branches     9920     9922       +2
==========================================
+ Hits        52287    52290       +3
- Misses       8311     8313       +2
- Partials     2548     2550       +2
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
hertschuh left a comment:
Thanks for addressing this.
Can you provide an example of where tf_keras and Keras 3 differ? In fact, it should be part of a unit test.
Here is a concrete example.

Before this change:

After this change (matching tf.keras behavior):

Example unit test:
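The before/after difference being described can be sketched with a minimal mock. The class and registry below are hypothetical stand-ins, not the actual Keras implementation:

```python
class MeanSquaredError:
    """Minimal stand-in for a Keras 3 metric class."""
    def __init__(self, name="mean_squared_error"):
        self.name = name

# Hypothetical alias registry: both strings resolve to the same class.
ALL_OBJECTS_DICT = {
    "mse": MeanSquaredError,
    "mean_squared_error": MeanSquaredError,
}

def get_before(identifier):
    # Old behavior: the instance keeps its class default name.
    return ALL_OBJECTS_DICT[identifier]()

def get_after(identifier):
    # Behavior proposed in this PR: the requested alias becomes the name.
    return ALL_OBJECTS_DICT[identifier](name=identifier)

print(get_before("mse").name)  # mean_squared_error
print(get_after("mse").name)   # mse
```

A unit test for the real change would assert the same thing against keras.metrics.get("mse").name.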
So, I tested with Keras 2:

```python
import os
os.environ["TF_USE_LEGACY_KERAS"] = "True"
import tensorflow as tf
from tensorflow import keras

print(keras.metrics.get("mse").name)
```

In this example, it's not a name difference. Keras 2 resolves "mse" to the `mean_squared_error` function.
@hertschuh You're absolutely right, thanks for checking this in Keras 2. This is not strict tf.keras parity: in Keras 2, "mse" resolves to a function, so there is no metric `name` to align.

This PR targets Keras 3 specifically. In Keras 3, aliases like "mse" resolve to Metric class instances, so using the alias as the default metric name (when no explicit name is provided) is clearer and matches what users expect. I've updated the PR description to remove the tf.keras parity claim.
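The Keras 2 vs Keras 3 resolution difference discussed here can be sketched in isolation. Both "registries" below are hypothetical stand-ins, not actual Keras internals:

```python
def mean_squared_error(y_true, y_pred):
    # Keras-2 style: the alias maps to a plain function.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

class MeanSquaredError:
    # Keras-3 style: the alias maps to a Metric class.
    def __init__(self, name="mean_squared_error"):
        self.name = name

KERAS2_STYLE = {"mse": mean_squared_error}
KERAS3_STYLE = {"mse": MeanSquaredError}

# Keras-2 style: the resolved object is a function with no `name`
# attribute, so there is nothing to align with the alias.
resolved_v2 = KERAS2_STYLE["mse"]
print(callable(resolved_v2), hasattr(resolved_v2, "name"))  # True False

# Keras-3 style: the alias can become the instance's name.
resolved_v3 = KERAS3_STYLE["mse"](name="mse")
print(resolved_v3.name)  # mse
```

This is why the parity claim only makes sense for Keras 3, where the identifier produces a named instance.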
But this already works without your change:

```python
import numpy as np
import keras

model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
history = model.fit(np.random.rand(10, 10), np.random.rand(10, 1), epochs=1)
print(history.history)
```

The metric in the output is named "mae", not "mean_absolute_error".