[WIP] New pooling for mysql and asyncpg #141
Conversation
Force-pushed from 7448ef1 to ac85459
Earlier I used a lock. The lock mechanism is a bit heavy-handed right now, so I'd welcome a more elegant solution.
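For reference, a minimal sketch of what such a lock-guarded client could look like; `LockedClient` and `FakeConnection` are hypothetical names for illustration, not code from this PR:

```python
import asyncio


class FakeConnection:
    """Stand-in for a real driver connection (hypothetical, for illustration)."""

    async def execute_query(self, query: str) -> str:
        await asyncio.sleep(0)  # pretend to do some I/O
        return "executed: " + query


class LockedClient:
    """Serialise access to a single connection with an asyncio.Lock."""

    def __init__(self, connection: FakeConnection) -> None:
        self._connection = connection
        self._lock = asyncio.Lock()  # heavy-handed: every query waits here

    async def execute_query(self, query: str) -> str:
        # Only one coroutine touches the connection at a time, at the cost
        # of serialising all concurrent queries.
        async with self._lock:
            return await self._connection.execute_query(query)


async def main() -> None:
    client = LockedClient(FakeConnection())
    results = await asyncio.gather(*(client.execute_query("SELECT %d" % i) for i in range(3)))
    print(results)


if __name__ == "__main__":
    asyncio.run(main())
```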
Force-pushed from ac85459 to 6b41690
Force-pushed from 6b41690 to f39cb9d
Force-pushed from f39cb9d to 3318587
Force-pushed from 57816c3 to dd3b839
Force-pushed from dd3b839 to 3820c78
@grigi Could you help me out here please? If you have time to take a look at it, that would be great, because I think this PR brings quite important functionality.
I'll have a look. I agree this is important. I tried a few times to "fix" this issue, but it was causing trouble. One solution would be to drop Py3.5, as it is going to be EOL'ed in a few months?
```python
# async def _tearDownDB(self) -> None:
#     _restore_default()
#     await self.transaction.rollback()
```
If this is broken, then practically everything to do with transactions is broken…
Oh wow, it creates a new connection, and does not honour the current connection.
This is with py3.7 and postgresql on my system.
The context is not kept. Basically the same issue I had with my attempts.
I think fundamentally we don't manage the context correctly. Or rather, the way we try to map a sync-like style of functionality to the async world is a problem. A possible situation is to allow transparent transactions only for:

This is part of the reason I wanted to change the whole TransactionWrapper pattern, as it becomes super difficult to reason about. In a threaded context the contextvar is thread-local. For coroutines it is scope-local, and unless we carefully control the entire scope it is going to be impossible.

Also: this PR changes so much, I have difficulty seeing what it is you even did :-(
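To make the scope-local point concrete, here is a minimal self-contained sketch (not Tortoise code): a ContextVar set inside one asyncio task is not seen by a sibling task or by the caller, because each task runs in a copy of the context.

```python
import asyncio
import contextvars

# Illustrative only: a value set inside one task does not leak to siblings
# or back to the caller, since each task gets a copy of the current context.
current_transaction = contextvars.ContextVar("current_transaction", default="no-transaction")


async def inside_transaction() -> None:
    current_transaction.set("tx-1")
    print("inside task:", current_transaction.get())   # tx-1


async def sibling() -> None:
    await asyncio.sleep(0)
    print("sibling task:", current_transaction.get())  # no-transaction


async def main() -> None:
    await asyncio.gather(inside_transaction(), sibling())
    print("after tasks:", current_transaction.get())   # no-transaction


asyncio.run(main())
```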
🤦‍♂️ I feel really stupid because I never realised it before.
I think we could leave (or somehow rename)
The real changes are moving

Also, I changed the default test runner to pytest, mostly because the more I was working with
This took me a LONG time to realise. Only after I did the concurrency fixes without any pooling did it become apparent.
Yes, we will likely have to break the API here. That's OK. If we break compat here, we could add a deprecation exception so that people know where to look for the change.
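Such a deprecation shim could look roughly like this; a sketch only, `start_transaction` and `DeprecatedApiError` are hypothetical names, not the actual API:

```python
import warnings


class DeprecatedApiError(RuntimeError):
    """Raised when a removed transaction API is used."""


def start_transaction(*args, **kwargs):
    # Hypothetical shim: keep the old entry point around, but point callers
    # at the replacement instead of failing silently.
    warnings.warn(
        "start_transaction() has been replaced; see the transaction docs "
        "for the new context-manager based API.",
        DeprecationWarning,
        stacklevel=2,
    )
    raise DeprecatedApiError("start_transaction() is no longer supported")
```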
Yes, I have similar thoughts. A single DB client should manage its own state. The issue is how do we pass an equivalent of

I would strongly recommend trying to fix the transaction consistency issues before adding connection pooling, because once we can manage context, the pool stuff becomes easy. Note the expected-failure test case in tortoise/tests/test_concurrency.py (test_concurrency_transactions_create) → that one breaks because transactions are not handled with enough isolation (the ContextVar issue), and it will even fail with a single connection on SQLite. So I'd tackle the transaction changes to get that to work right first, just to minimise the side-effects and keep the changes focused.

Also, note that py3.7 has much more fine-grained context-var behaviour, so if you can get it working on that, and then on 3.6 with the polyfill, I'll be ecstatic.

Also, I saw you generated the requirements on py3.5 (black was excluded). I used to always run this on py3.6, but after changing my dev environment to 3.7 I had to do what is in this commit: b4a9ce7
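The real test lives in tortoise/tests/test_concurrency.py; purely as an analogy, the shape of such an expected-failure concurrency test could look like the sketch below, with a hypothetical `FakeClient` standing in for the ORM client and assuming pytest plus the pytest-asyncio plugin:

```python
import asyncio

import pytest


class FakeClient:
    """Stand-in with a single shared 'current transaction' slot, mimicking
    the kind of shared state that breaks under concurrent transactions."""

    def __init__(self):
        self.current_transaction = None
        self.log = []

    async def create(self, name):
        await asyncio.sleep(0)  # yield control so concurrent tasks interleave
        self.log.append((name, self.current_transaction))


@pytest.mark.xfail(reason="shared transaction state leaks across concurrent tasks")
@pytest.mark.asyncio
async def test_concurrent_transactions_do_not_leak():
    client = FakeClient()

    async def in_transaction(tx_name, item):
        client.current_transaction = tx_name  # no isolation: one global slot
        await client.create(item)
        client.current_transaction = None

    await asyncio.gather(
        in_transaction("tx-a", "item-a"),
        in_transaction("tx-b", "item-b"),
    )

    # Each item should record the transaction it was created in, but with a
    # shared slot one transaction observes the other's state.
    assert client.log == [("item-a", "tx-a"), ("item-b", "tx-b")]
```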
@grigi hi, I am using tortoise in my company's project now; it requires very high concurrency, so a connection pool is required. I've tried replacing backend/base/client.py and backend/asyncpg/client.py with this PR, did a little pressure test, and it worked out pretty well. So basically I was wondering why you are so concerned about the contextvars inside base/client.py, because from what I see, it is first initialized as the AsyncpgDBClient object, then when a transaction starts, _current_transaction is set to the current TransactionWrapper object, and when the transaction finishes, _current_transaction is set back to the AsyncpgDBClient again. Could you please tell me what the use case of this contextvar is and what potential problems it could cause? Btw, I am using postgresql for my project. Appreciated!
Hi @15390413627

There is some more background to what was going on here: #69 (comment)

But generally the problem comes down to the fact that we need to provide support, so it needs to work well for every supported situation. I don't want to put something out there that I know is badly broken for one of our supported cases. One of the issues with the current system is that with concurrent access while a transaction is going on, some context leaks across the transaction boundaries, which is unacceptable for some systems and acceptable behaviour for others.

I'll have a look again. We have decided to deprecate py3.5, which opens up more modern ways of doing async context managers, which I think should help resolve the last pressing issue, as we should now have more control over the stack.

As an intermediate solution, how do you propose we do this to support your use case over the next while until we get everything perfect?
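For illustration, with py3.5 gone a transaction scope could be expressed as an async context manager around a ContextVar, so the context is always restored at the block boundary. This is a rough sketch; `client.start_transaction()`, `commit()` and `rollback()` are hypothetical method names, not the actual client API:

```python
import contextvars
from contextlib import asynccontextmanager  # requires py3.7+

current_connection = contextvars.ContextVar("current_connection")


@asynccontextmanager
async def in_transaction(client):
    connection = await client.start_transaction()  # hypothetical client API
    token = current_connection.set(connection)
    try:
        yield connection
        await connection.commit()
    except Exception:
        await connection.rollback()
        raise
    finally:
        # reset() restores whatever was current before this block, so the
        # transaction context cannot leak past the `async with` boundary.
        current_connection.reset(token)
```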
Hi @grigi, I am @15390413627, so don't get confused, I just don't like that phone-number name GitHub gave me ... Anyway, in order to make my project work, after 2 days of struggle I finally made it work. Check it out in #176. I did this only to satisfy my need: high-concurrency support for postgresql only. I only did a pressure test; I did not run the unit tests because of weird problems on my MacBook. So this is my intermediate solution until you guys make everything alright. What do you think?
Ah, @Quinn-Fang, thanks for clearing that up. I wondered about the weird number. I'll look at it :-)
superseded by #229 |
Closes #140