lazily evaluate the integrator coefficients #311
Conversation
Introducing the class and requiring that consumers instantiate it makes the code harder to use. What do you think about using a module-level `__getattr__`?
Additionally, I don't follow the reasoning for why avoiding linear algebra on import is better style. Why do you claim it is better?
Two reasons why it might not be better.
This is not true, and that is the whole reason I would like to make this change. Calling a …
Fair, although the end of the main support window for 3.6 was 2018, and NEP 29 claims we're solidly in 3.7 territory. On the one hand, I understand we don't want to make a minor release for this bug, and we wouldn't want to bump dependencies for a patch release. However, we are changing module attributes that are undocumented but also not underscored, so the current implementation could be a breaking change.
Something like this should work (one of many possible implementations):

```python
from functools import lru_cache

import numpy as np
import scipy.linalg

# `calc_V` and `calc_bdef` are the helpers already defined in this module.


@lru_cache()  # parentheses needed on Python < 3.8
def _coefficients():
    eps = np.spacing(1)

    # the nodes and Newton polynomials
    ns = (5, 9, 17, 33)
    xi = [-np.cos(np.linspace(0, np.pi, n)) for n in ns]

    # Make `xi` perfectly anti-symmetric, important for splitting the intervals
    xi = [(row - row[::-1]) / 2 for row in xi]

    # Compute the Vandermonde-like matrix and its inverse.
    V = [calc_V(x, n) for x, n in zip(xi, ns)]
    V_inv = list(map(scipy.linalg.inv, V))
    Vcond = [
        scipy.linalg.norm(a, 2) * scipy.linalg.norm(b, 2)
        for a, b in zip(V, V_inv)
    ]

    # Compute the shift matrices.
    T_left, T_right = [V_inv[3] @ calc_V((xi[3] + a) / 2, ns[3]) for a in [-1, 1]]

    # If the relative difference between two consecutive approximations is
    # lower than this value, the error estimate is considered reliable.
    # See section 6.2 of Pedro Gonnet's thesis.
    hint = 0.1

    # Smallest acceptable relative difference of points in a rule. This was
    # chosen such that no artifacts are apparent in plots of (i, log(a_i)),
    # where a_i is the sequence of estimates of the integral value of an
    # interval and all its ancestors.
    min_sep = 16 * eps

    ndiv_max = 20

    # Set up the downdate matrix.
    k = np.arange(ns[3])
    alpha = np.sqrt((k + 1) ** 2 / (2 * k + 1) / (2 * k + 3))
    gamma = np.concatenate([[0, 0], np.sqrt(k[2:] ** 2 / (4 * k[2:] ** 2 - 1))])

    b_def = calc_bdef(ns)
    return locals()


def __getattr__(attr):
    return _coefficients()[attr]
```
I have rebased this on top of the latest master (which has the Python ≥ 3.7 requirement), so all tests should now pass.
Codecov Report

```
@@            Coverage Diff             @@
##           master     #311      +/-   ##
==========================================
+ Coverage   80.50%   80.57%   +0.06%
==========================================
  Files          35       35
  Lines        4633     4638       +5
  Branches      834      834
==========================================
+ Hits         3730     3737       +7
+ Misses        778      777       -1
+ Partials      125      124       -1
```

Continue to review full report at Codecov.
Description
Currently, having the following in a script:
breaks any Python process because of pytorch/pytorch#54063.
This is not a proper fix, but it avoids the problem.
However, rather than doing any math on import, I think it is better style to only evaluate the coefficients when they are required.
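The lazy pattern can be demonstrated end to end with a throwaway module built at runtime; all names here (such as `lazy_demo` and the stand-in coefficient values) are hypothetical, not the library's real API:

```python
import sys
import types
from functools import lru_cache

# Build a throwaway module at runtime to demonstrate PEP 562 lazy
# attributes without touching the real package.
mod = types.ModuleType("lazy_demo")
calls = []


@lru_cache()
def _coefficients():
    calls.append("computed")  # record that the "heavy" work ran
    return {"hint": 0.1, "ndiv_max": 20}


def _module_getattr(attr):
    try:
        return _coefficients()[attr]
    except KeyError:
        raise AttributeError(attr) from None


mod.__getattr__ = _module_getattr  # PEP 562 hook, Python >= 3.7
sys.modules["lazy_demo"] = mod

import lazy_demo                   # importing does no work at all

assert calls == []                 # nothing computed yet
assert lazy_demo.hint == 0.1       # first access triggers the computation
assert lazy_demo.ndiv_max == 20    # cached: no second computation
assert calls == ["computed"]
```

The key point is that `import lazy_demo` is free; the expensive function runs only on the first attribute access, and `lru_cache` makes every later access a dict lookup.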
Checklist

- `pre-commit run --all` (first install using `pip install pre-commit`)
- `pytest` passed

Type of change

Check relevant option(s).