lysdexic

joined 1 year ago
[–] [email protected] 1 points 9 minutes ago (1 children)

So that’s where I would say, as long as performance doesn’t matter it’s better to default to B-Tree maps than to hash maps, because the chance of avoiding bugs is more valuable than immeasurable performance benefits (...)

I don't quite follow. What leads you to believe that a B-Tree map implementation would have a lower chance of having a bug when you can simply pick any standard and readily available hash map implementation?

Also, you fail to provide any concrete reasoning for B-Tree maps. It's not performance on any of the dictionary operations, and bugs aren't it either. What's the selling point that you're seeing?

[–] [email protected] 1 points 9 hours ago (3 children)

the reason I tend to recommend B-Tree maps over hash maps for ordinary programming is consistent iteration order.

Hash maps tend to be used to take advantage of constant-time lookup and insertion, not iteration. Hash maps aren't really suited for that use case.

Programming languages tend to provide two standard dictionary containers: a hash map implementation suited for lookups and insertions, and a tree-based map that keeps elements sorted by key.
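To make the difference concrete, here's a minimal C# sketch using Dictionary and SortedDictionary as the hash-based and tree-based containers (.NET's SortedDictionary is a balanced binary search tree rather than a B-Tree, but the iteration guarantee is the same):

```csharp
using System;
using System.Collections.Generic;

class MapOrdering
{
    static void Main()
    {
        // Hash-based map: enumeration order is unspecified and may change
        // between runs or after internal resizing.
        var hash = new Dictionary<string, int> { ["b"] = 2, ["a"] = 1, ["c"] = 3 };

        // Tree-based map: enumeration always follows key order.
        var tree = new SortedDictionary<string, int> { ["b"] = 2, ["a"] = 1, ["c"] = 3 };

        Console.WriteLine(string.Join(", ", hash.Keys)); // order not guaranteed
        Console.WriteLine(string.Join(", ", tree.Keys)); // always: a, b, c
    }
}
```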

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago) (2 children)

Yeah, the quality on Lemmy is nowhere (...)

Go ahead and contribute things that you find interesting instead of wasting your time whining about what others might like.

So far, all you're contributing is whiny shitposting. You can find plenty of that on Reddit too.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago)

It’s from 2015, so it’s probably what you are doing anyway

No, you are probably not using this at all. The problem with JSON is that these details are all handled in an implementation-defined way, and most implementations just fail or round silently.

Just give it a try: send a JSON document with, say, a huge integer down the wire and see if that triggers a parsing error. For starters, in .NET both Newtonsoft and System.Text.Json cap integers at 64 bits.

https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializeroptions.maxdepth
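You can check both behaviours in a couple of lines. A sketch with System.Text.Json (the exact exception type on overflow may vary by library version):

```csharp
using System;
using System.Text.Json;

class JsonIntegerLimits
{
    static void Main()
    {
        // 2^53 + 1: fits in a 64-bit integer, but not in a double.
        const string json = "9007199254740993";

        long asLong = JsonSerializer.Deserialize<long>(json);       // exact
        double asDouble = JsonSerializer.Deserialize<double>(json); // silently rounded

        Console.WriteLine(asLong);         // 9007199254740993
        Console.WriteLine((long)asDouble); // 9007199254740992

        // 2^63: too big even for a signed 64-bit integer — this one
        // actually fails instead of rounding.
        try
        {
            JsonSerializer.Deserialize<long>("9223372036854775808");
        }
        catch (Exception e) when (e is JsonException or FormatException)
        {
            Console.WriteLine($"Parse failed: {e.GetType().Name}");
        }
    }
}
```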

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago)

Why restrict to 54-bit signed integers?

Because number is a double, and IEEE 754 specifies the significand of double-precision numbers as 53 bits plus a sign bit.

Meaning, it's the widest integer a double-precision value can represent exactly.

I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.

It's not about compatibility. It's because JSON has only a single number type, which covers both floating point and integers, and in practice number is implemented as a double-precision value. If you have to express integers with a double-precision type, once you go beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
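You can watch the precision run out right at that boundary. A minimal C# illustration:

```csharp
using System;

class DoublePrecision
{
    static void Main()
    {
        double max = 9007199254740992; // 2^53: last point where every integer is representable

        Console.WriteLine(max == max + 1); // True  — 2^53 + 1 rounds back to 2^53
        Console.WriteLine(max == max + 2); // False — 2^53 + 2 is representable (even)
    }
}
```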

[–] [email protected] 0 points 1 day ago* (last edited 1 day ago) (3 children)

The only thing TCP_NODELAY does is disable packet batching/merging through Nagle's algorithm. That batching increases throughput by cutting the redundant overhead of sending small data payloads in individual packets, with the tradeoff of higher latency. It's a tradeoff between latency and throughput. I don't see any reason for transfer rates to drop with Nagle's algorithm enabled; quite the opposite. In fact, the very few benchmarks I saw showed exactly that: TCP_NODELAY causing a drop in the transfer rate.
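For reference, this is the knob in question; in C# it's a one-liner on the socket (a sketch, not an endorsement either way):

```csharp
using System.Net.Sockets;

// Disabling Nagle's algorithm: small writes are sent immediately instead
// of being coalesced into fewer, larger packets. Lower per-write latency,
// more per-packet overhead.
var client = new TcpClient { NoDelay = true };

// Equivalent at the raw socket level:
// socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
```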

There are also articles on the cargo cult behind TCP_NODELAY.

But feel free to show your data.

[–] [email protected] 3 points 1 day ago (5 children)

A reminder that TCP_NODELAY should be set by default,

Why do you believe that?

[–] [email protected] 0 points 1 day ago

It’s very hard for “Safe C++” to exist when integer overflow is UB.

You could simply state you did not read the article and decided to comment out of ignorance.

If you had spent one minute skimming the article, you would have stumbled upon the section on undefined behavior. Instead, you opted to post ignorant drivel.

[–] [email protected] 4 points 1 day ago

I wouldn’t call bad readability a loaded gun really.

Bad readability is a problem caused by the developer, not the language. Anyone can crank out unreadable symbol soup in any language, if that's what they want or all they can deliver.

Blaming the programming language for the programmer's incompetence is very telling; so telling, in fact, that there's even a saying: a bad workman always blames his tools.

[–] [email protected] 5 points 1 day ago* (last edited 1 day ago)

Well, auto looks just like var in that regard.

It really isn't. Neither in C# nor in Java. They are just syntactic sugar to avoid redundant type specifications. I mean things like Foo foo = new Foo();. Who gets confused by that?

Why do you think IDEs are able to tell which type a variable is?

C# even takes it a step further and allows developers to omit the type in the constructor call with its target-typed new expressions. No one whines about dynamic typing just because the language lets you instantiate an object with Foo foo = new();.
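A quick sketch of the point: the type is fully known at compile time in all three forms below, which is exactly how IDEs resolve it.

```csharp
using System.Collections.Generic;

// All three declarations are statically typed; the compiler infers or
// fills in the missing half at compile time. No dynamic typing involved.
Dictionary<string, int> a = new Dictionary<string, int>(); // fully spelled out
var b = new Dictionary<string, int>();                     // var: type inferred from the right-hand side
Dictionary<string, int> c = new();                         // target-typed new (C# 9+): constructor inferred from the left
```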

[–] [email protected] 1 points 1 day ago (1 children)

I think I could have stated my opinion better. I think LLMs' total value remains to be seen. They allow totally incompetent developers to occasionally pass as below-average developers.

This is a baseless assertion from your end, and a purely personal one.

My anecdotal evidence is that the best software engineers I know use these tools extensively to get rid of churn and drudge work, and they apply them anywhere and everywhere they can.

CPU Flame Graphs (www.brendangregg.com)
submitted 5 days ago* (last edited 5 days ago) by [email protected] to c/[email protected]