Fix: Large table causes sql error #2242
Koc wants to merge 2 commits into feature/speedup-value-formatting from
Conversation
enjeck left a comment
Testing locally:
With 10k rows I don't notice any speedup before vs. after. How are you measuring the timing?
And 100k rows fails for both.
$row->setLastEditAt($sleeve['last_edit_at']);
$row->setTableId((int)$sleeve['table_id']);

$cachedCells = json_decode($sleeve['cached_cells'] ?? '{}', true);
Are we falling back to the normal (non-cached) path somewhere else if nothing is returned from cached_cells?
No, it's not needed: we cache all rows during the migration (see OCA\Tables\Migration\CacheSleeveCells) and we update cached_cells on every cell edit. So I don't have any use case where cached_cells can be missing or outdated.
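(A minimal sketch of the write-through idea described above: only cached_cells, json_decode, and the $sleeve data come from the PR's diff; the class and method names below are hypothetical, not the actual mapper code.)

```php
<?php
// Illustrative only: keep the cached_cells JSON on the row sleeve in sync
// whenever a single cell is edited, so reads never have to re-format a row.
class SleeveCellCache {
	/**
	 * @param string|null $cachedCellsJson current cached_cells value stored on the sleeve
	 * @param int $columnId the edited column
	 * @param mixed $formattedValue the freshly formatted cell value
	 * @return string updated JSON to write back into cached_cells
	 */
	public function updateCachedCell(?string $cachedCellsJson, int $columnId, $formattedValue): string {
		// Decode the existing cache (empty object if the column is NULL),
		// merge in the edited cell, and re-encode it.
		$cachedCells = json_decode($cachedCellsJson ?? '{}', true);
		$cachedCells[$columnId] = $formattedValue;
		return json_encode($cachedCells, JSON_THROW_ON_ERROR);
	}
}
```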
foreach ($columnIds as $columnId) {
	$columns[$columnId] = $this->columnMapper->find($columnId);
	$mappers[$columnId] = $this->columnsHelper->getCellMapperFromType($columns[$columnId]->getType());
Looks like we run into the n+1 problem?
Not sure this is a big problem here: we call ColumnMapper::preloadColumns() before the Row2Mapper::getRows() call, and it memoizes the loaded columns, so the per-column find() calls are served from memory.
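(A minimal sketch of the preload-and-memoize pattern being referred to, assuming an in-memory cache keyed by column id; preloadColumns() and find() are the method names mentioned above, but the body is illustrative and not the real ColumnMapper.)

```php
<?php
// Illustrative sketch: preload all needed columns once, then serve the
// per-column find() calls from memory so the loop above does not become N+1.
class MemoizingColumnMapper {
	/** @var array<int, object> columnId => column entity */
	private array $cache = [];

	/** Load every requested column in a single query and memoize the result. */
	public function preloadColumns(array $columnIds): void {
		$missing = array_diff($columnIds, array_keys($this->cache));
		foreach ($this->loadFromDb($missing) as $column) {
			$this->cache[$column->id] = $column;
		}
	}

	/** Served from the memoized cache; only falls back to the DB if preload was skipped. */
	public function find(int $columnId): object {
		return $this->cache[$columnId] ??= $this->loadFromDb([$columnId])[0];
	}

	/** Placeholder for a single "WHERE id IN (...)" query. */
	private function loadFromDb(array $columnIds): array {
		return array_map(fn (int $id) => (object)['id' => $id], $columnIds);
	}
}
```

With the preload done before Row2Mapper::getRows(), the find() calls inside the loop are answered from memory rather than issuing one query per column.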
Hey @enjeck!
I'm really surprised that you don't see any performance improvement with my PRs. I've spent some time and recorded a screencast for you that compares:
nextcloud-tables-performance-2026-01-27_17.42.33.mp4
I'm sorry, but are you sure you switched to the correct branch? Can you share an xlsx file with the table data, so I can import it on my instance and re-test? BTW, since we're talking about importing large files, I highly recommend merging another of my PRs, #1801 😃
This PR is built on top of #2238.
Closes #1490.
It fixes loading of large tables. Right now it is not possible to load a table with 30k rows and 8 columns due to an error:
The PR contains 2 commits: the fix itself and a performance improvement.
I've measured latency for various scenarios of the /row/table/{tableId} endpoint:

Next step: move filtering/sorting/pagination to the BE side and speed up the FE. But this is a completely separate topic which will be implemented in an upcoming PR.
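(A hedged sketch of one way latency for the /row/table/{tableId} endpoint could be timed from a small PHP script; the base URL, credentials, table id, and run count are placeholders, and this is not necessarily how the measurements for this PR were taken.)

```php
<?php
// Hypothetical timing helper: issue the request a few times and average
// the wall-clock latency. All concrete values below are placeholders.
function timeRequest(string $url, string $user, string $password): float {
	$ch = curl_init($url);
	curl_setopt_array($ch, [
		CURLOPT_RETURNTRANSFER => true,
		CURLOPT_USERPWD => "$user:$password",
	]);
	$start = microtime(true);
	curl_exec($ch);
	curl_close($ch);
	return microtime(true) - $start;
}

$base = 'https://cloud.example.com/index.php/apps/tables/row/table/42'; // placeholder URL
$times = [];
for ($i = 0; $i < 5; $i++) {
	$times[] = timeRequest($base, 'admin', 'secret');
}
printf("avg over %d runs: %.3fs\n", count($times), array_sum($times) / count($times));
```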