this post was submitted on 10 Jun 2023
23 points (96.0% liked)

Programming


All things programming and coding related. Subcommunity of Technology.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 1 year ago

I haven't had time to read the full article yet, but the topic is really interesting. This is surely a great application of deep learning. Anyone have insights on this topic?

top 7 comments
[–] [email protected] 5 points 1 year ago (1 children)

At first glance this doesn't seem to be as impactful as you might think...

We reverse engineered the low-level assembly sorting algorithms discovered by AlphaDev for sort 3, sort 4 and sort 5 to C++ and discovered that our sort implementations led to improvements of up to 70% for sequences of a length of five and roughly 1.7% for sequences exceeding 250,000 elements.

[–] [email protected] 6 points 1 year ago (1 children)

Wouldn't this still be pretty massive at large scales like datacenters?

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

Yes, I'm saying this is actually more important than you might initially think.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

I'm afraid it would take a lot of time for me to fully understand this paper, even though I know a thing or two about computer science.

I'm positive you can't do better than O(n log n) comparisons when sorting a vector of size n, and the most widely used algorithm, quicksort, gets very close to that bound on average. I guess the improvement here only affects the constant factor, but I can't be sure without actually reading the paper. Feel free to correct me, of course c:

[–] [email protected] 4 points 1 year ago

I gave it a quick skim. It seems the improvements are on specific sort tasks. There may also be some fitting to the distribution of sequences. It won't beat n log(n) for all cases, but it might do better for common situations in the data set.

It’s still neat they made it work.

[–] [email protected] 2 points 1 year ago

My first thoughts were along the same lines. I think the new "algorithms" are heuristics for small values of n.

[–] [email protected] 1 points 1 year ago

Sorry, I'm now seeing this was already posted in this community a few days ago :(
