Add locks to local hash index and add basic concurrent insertion tests #5828
Conversation
Codecov Report

❌ Your patch status has failed because the patch coverage (75.41%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.

@@            Coverage Diff             @@
##           master    #5828      +/-   ##
==========================================
- Coverage   85.97%   85.94%   -0.03%
==========================================
  Files        1620     1620
  Lines       73600    73807     +207
  Branches     8798     8828      +30
==========================================
+ Hits        63277    63434     +157
- Misses      10096    10142     +46
- Partials      227      231      +4
Benchmark Result

Master commit hash:
Other queries
@benjaminwinger should come back and review the lock.
-    void reserveInserts(uint64_t newEntries) { localInsertions.reserve(newEntries); }
+    void reserveInserts(uint64_t newEntries) {
+        std::unique_lock xLock{mtx};
+        localInsertions.reserve(newEntries);
+    }
I don't think this is the right behaviour here (though it should be safe). If we have multiple threads bulk reserving we should reserve the combined amount.

This also applies to reserveSpaceForAppend, and maybe the solution should be to track capacity explicitly and merge the two functions, and in both situations just add to the capacity (even now, I think reserveInserts could be replaced by reserveSpaceForAppend, since I think reserveInserts is only called when the number of existing entries is zero). I think that should be safe, since the local storage gets cleared during rollbacks, so if there is an interruption we won't just keep growing the capacity if the reserve is tried again.
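
A minimal sketch of that suggestion, for illustration only: the member names `mtx`, `localInsertions`, and `reservedCapacity` below are stand-ins, not the actual Kuzu code, and the merged function is an assumption about how the two reserve paths could be combined.

```cpp
#include <cstdint>
#include <mutex>
#include <unordered_map>

// Illustrative stand-in for the local hash index's in-memory insertions.
struct LocalIndexSketch {
    std::mutex mtx;
    std::unordered_map<uint64_t, uint64_t> localInsertions;
    uint64_t reservedCapacity = 0;

    // Merged replacement for reserveInserts/reserveSpaceForAppend:
    // each caller adds its expected number of new entries, so concurrent
    // bulk reservations accumulate instead of racing to set an absolute size.
    void reserveSpaceForAppend(uint64_t numNewEntries) {
        std::unique_lock xLock{mtx};
        reservedCapacity += numNewEntries;
        localInsertions.reserve(reservedCapacity);
    }

    // On rollback the local storage is cleared, so the tracked capacity is
    // reset too and an interrupted reserve cannot keep growing it.
    void clear() {
        std::unique_lock xLock{mtx};
        localInsertions.clear();
        reservedCapacity = 0;
    }
};
```

The key point is that each reservation adds to a tracked total under the lock, rather than each caller passing an absolute size that can overwrite another thread's reservation.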
Description
See title.
Some performance numbers for 1M node insertions (1 insertion per transaction). The scaling bottleneck mainly comes from locks within the index.
COPY performance is almost the same on the LDBC 100 comment table.
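
For reference, a basic concurrent insertion test along these lines could look like the sketch below. This is standalone C++; `LockedIndex` and all names here are illustrative stand-ins under assumed semantics, not Kuzu's local hash index or its test harness.

```cpp
#include <cassert>
#include <cstdint>
#include <mutex>
#include <thread>
#include <unordered_map>
#include <vector>

// Minimal locked index stand-in for demonstrating the test shape.
struct LockedIndex {
    std::mutex mtx;
    std::unordered_map<uint64_t, uint64_t> entries;

    void insert(uint64_t key, uint64_t value) {
        std::unique_lock xLock{mtx};
        entries.emplace(key, value);
    }
};

int main() {
    constexpr uint64_t numThreads = 8;
    constexpr uint64_t insertsPerThread = 100'000;
    LockedIndex index;

    // Each thread inserts a disjoint key range concurrently.
    std::vector<std::thread> threads;
    for (uint64_t t = 0; t < numThreads; t++) {
        threads.emplace_back([&index, t]() {
            for (uint64_t i = 0; i < insertsPerThread; i++) {
                uint64_t key = t * insertsPerThread + i;
                index.insert(key, key);
            }
        });
    }
    for (auto& thread : threads) {
        thread.join();
    }

    // All insertions should be present exactly once.
    assert(index.entries.size() == numThreads * insertsPerThread);
    return 0;
}
```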