this post was submitted on 02 Oct 2023
277 points (96.0% liked)

Programming

[–] [email protected] 44 points 11 months ago* (last edited 11 months ago) (2 children)

That depends on your definition of correct lmao. Rust's `len()` explicitly counts bytes, because `str` stores the raw UTF-8 bytes of the string. There are many times where that value is more useful than the grapheme count.
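A minimal sketch of that distinction (my own example, not from the thread), using only the standard library:

```rust
fn main() {
    let s = "\u{1F926}"; // 🤦 U+1F926, a single Unicode scalar value
    // len() counts the bytes of the UTF-8 encoding
    assert_eq!(s.len(), 4);
    // chars() iterates over Unicode scalar values (codepoints)
    assert_eq!(s.chars().count(), 1);
}
```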

[–] [email protected] 16 points 11 months ago (2 children)

And Rust also has `"🤦".chars().count()`, which returns 1.

I would rather argue that Rust should not have a simple `len` function for strings at all, but since `str` is just a byte slice, it works that way.

Also, the documentation for `len` clearly states:

This length is in bytes, not chars or graphemes. In other words, it might not be what a human considers the length of the string.
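To illustrate that warning with my own sketch: neither the byte count nor the codepoint count matches what a human would call the length once combining characters are involved.

```rust
fn main() {
    // "é" as the single precomposed codepoint U+00E9
    assert_eq!("\u{00E9}".len(), 2);           // 2 bytes
    assert_eq!("\u{00E9}".chars().count(), 1); // 1 codepoint
    // the same glyph built as "e" + combining acute accent U+0301
    assert_eq!("e\u{0301}".len(), 3);           // 3 bytes
    assert_eq!("e\u{0301}".chars().count(), 2); // 2 codepoints
    // a human would call both of these a string of length 1
}
```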

[–] [email protected] 11 points 11 months ago

None of these languages should have a generic len() or size() for strings, come to think of it. It should always be something explicit like bytes(), chars() or graphemes(). But they're there for legacy reasons.

[–] [email protected] 10 points 11 months ago (1 children)

That Rust function returns the number of codepoints, not the number of graphemes, and the codepoint count is rarely the one you actually want. You need to use a facepalm emoji with a skin-tone modifier to see the difference.

The way to get a proper grapheme count in Rust is e.g. via this library: https://crates.io/crates/unicode-segmentation

[–] [email protected] 9 points 11 months ago (1 children)

Makes sense: the codepoint split is stable, meaning it's fine to put in the standard library, while the grapheme split changes every year, so that volatility is probably better off in a crate.

[–] [email protected] 7 points 11 months ago (1 children)

Yeah, although having now seen two commenters claiming with relatively high confidence that counting codepoints ought to be enough...

...and me almost having been the third such commenter, had I not decided to read the article first...

...I'm starting to feel more and more like the stdlib should force you through all kinds of hoops to get anything resembling a size of a string, so that you gladly search for a library.

Like, I've worked with decoding strings quite a bit in the past, so I felt like I had an above-average understanding of Unicode. And I was still only vaguely aware of graphemes.

[–] [email protected] 1 points 11 months ago (1 children)

For what it's worth, the documentation is very clear about what these methods return, and it explicitly points you to crates.io for splitting into grapheme clusters. It would be nicer to have it in std, but I understand the argument that std should only contain stable stuff.

For a systems programming language, having the `.len()` method return the byte count is the right call IMO.

[–] [email protected] 2 points 11 months ago

The problem is when you think you know stuff, but you don't. I knew that counting bytes doesn't work, but thought the number of codepoints was what I wanted. And then, knowing that Rust uses UTF-8 internally, it's logical that `.chars().count()` gives the number of codepoints. No need to read the documentation if you're so smart. 🙃

It does give you the correct length in quite a lot of cases, too. Even the plain byte length looks correct for pure-ASCII strings.

So, yeah, this would require a lot more consideration whether it's worth it, but I'm mostly thinking there'd be no .len() on the String type itself, and instead to get the byte count, you'd have to do .as_bytes().len().
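The `.as_bytes().len()` spelling already works today and is identical to `.len()`; and for pure ASCII all three counts happen to agree, which is exactly why the byte length looks "correct" there (my own sketch):

```rust
fn main() {
    let s = String::from("hello"); // pure ASCII
    assert_eq!(s.len(), 5);            // bytes
    assert_eq!(s.as_bytes().len(), 5); // same thing, spelled explicitly
    assert_eq!(s.chars().count(), 5);  // codepoints: equal for ASCII
}
```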

[–] [email protected] 7 points 11 months ago (2 children)

Yeah, and as much as I understand the article's point that there should be an easily accessible method for the grapheme count, it's also kind of mad to put something like that into a stdlib.

Its behaviour would change with each new Unicode standard, and you'd have to upgrade the whole stdlib to keep up to date with the newest Unicode version.

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago) (2 children)

~~The way UTF-8 works is fixed though, isn't it? A new Unicode standard should not change that, so as long as the string is UTF-8 encoded, you can determine the character count without needing to have the latest Unicode standard.~~

~~Plus in Rust, you can instead use .chars().count() as Rust's char type is UTF-8 Unicode encoded, thus strings are as well.~~

turns out one should read the article before commenting

[–] [email protected] 6 points 11 months ago (1 children)

No offense, but did you read the article?

You should at least read the section "Wouldn't UTF-32 be easier for everything?" and the following two sections for the context here.

So, everything you've said is correct, but it's irrelevant for the grapheme count.
And you should pretty much never need to know the number of codepoints.

[–] [email protected] 3 points 11 months ago (1 children)

yup, my bad. Frankly I thought grapheme meant something else, rather stupid of me. I think I understand the issue now and agree with you.

[–] [email protected] 3 points 11 months ago

No worries, I almost commented here without reading the article, too, and did not really know what graphemes were beforehand either. 🫠

[–] [email protected] 2 points 11 months ago

Nope, the article says that what is and is not a grapheme cluster changes between Unicode versions each year :)

[–] [email protected] 4 points 11 months ago

It might make more sense to expose a standard library API for Unicode data provided by (and updated with) the operating system, something like the time zone database.