Kissaki

joined 1 year ago
[–] [email protected] 8 points 2 weeks ago (2 children)

The "Roast my profile" in question.

Your 105 repos resemble a crowded yard sale, filled with half-baked ideas and a couple of dusty gems.

lol

[–] [email protected] 1 points 2 weeks ago

That's so sweet, I love it!

it's amazing to see the breadth of projects you've worked on and shared with the world!

😊

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago)

It’s been shown that AI isn’t at a level where using it for anything isn’t beneficial, in fact it’s the contrary.

Maybe you're thinking of something more specific than I am, but I don't think that's the case. What's being called AI is a broad field.

I think what Opus was able to implement for high packet-loss voice transmission is exceptional.

I also find Visual Studio's inline completions to be very useful.

That's far from the typical chatbots and whatnot, though. Which often suck.

[–] [email protected] 3 points 2 weeks ago (1 children)

At work, we recently talked about AI. One use case mentioned (by an AI consulting firm, not us or actually suggested for us) was meeting summaries and extracting TODOs from them.

My stance is that AI could be useful for summaries of topics, so you can see what was being talked about. But I would never trust it with extracting all of the significant points, TODOs, or agreements. You still need humans to do that, and to have explicit agreement and confirmation of the list in or after the meeting.

It can also help to transcribe meetings. It could even translate them. Those things can be useful. But summarization should never be considered factual extraction of the significant points. Especially in a business context, or anything else where you actually care about being able to trust information.

I wouldn't [fully] trust it with transforming facts either. It can work where you can spot inaccuracies (long text, lots of context), or where you don't care about them.

Natural language instructions to machine instructions? I'd certainly be careful with that, and want to both contextualize and test-confirm it works well enough for the use case and context.

[–] [email protected] 1 points 3 weeks ago (1 children)

I'm glad I work on software that has value, where I control the entire ecosystem, and where my contributions are significant.

[–] [email protected] 1 points 3 weeks ago

You could say you're in good company

[–] [email protected] 2 points 3 weeks ago

You point to Valve as a success story, but the "pick the work you want" approach also led to fewer deliverables and less focus, and they had to rein it in. Free picking and experimentation are fine until you reach a point where you want to get something out the door - when it's a bigger thing, and you need more, and more focused, people to bring it to the finish line.


I can't speak to how it would be elsewhere and everywhere, but I can speak from personal experience about how my workplace is set up.

We're relatively small and work for various customers, some continuous and some contract-scoped. Developers work and speak either directly to and with customers, or have at most one person "in between" who is usually part of our team.

We have an agile and collaborative mindset, and often guide our customers into productive workflows.

Being on relatively small teams, with opportunity for high personal impact and with agency, I was able to take initiative and work in a way I am very satisfied with. I am able to prioritize myself, collaborate with my customer to understand their needs, understandings, and priorities, and then make my decisions - explicitly or implicitly. Two-week planning sessions give good checkpoints to review and reassess intended priorities - which are only guides. Stuff comes up that takes priority anyway, whether from the customer, or improving code when you stumble upon it.

I'm glad to be on my current team, where the customer pays monthly for how much we worked, so there's no repeated contract-work estimation. I can and do decide on what makes sense, and we communicate about priorities, planning, and consequences. Either I decide, or we discuss whether one solution or another makes more sense considering effort, significance, and degree of completeness or acceptability. One person from the customer is our direct gate to them; they participate in meetings, planning, tickets, and prioritization. They funnel all of their requests to us and communicate with us about what they deem important enough. And they are our gateway for asking the customer's roles and people about usage, functionality, needs, etc.

For me, this environment is perfect. It allows me to collaborate with the customers to match their long-term needs.

I think it needs good enough developers, though. There are those who are mindful and actively invested, but also people who are not. Some become great, productive workers with guidance and experience, but it doesn't always fit. Proactive, good development isn't a given even with that environment and agency, but I don't think "management" improves that. You're putting a manager on top in the hope that they're a person like that. But why couldn't that be a team member in the first place?

Managers and stricter role splitting become more necessary or efficient the bigger you scale. I feel like smaller projects and teams are more efficient and satisfying. You have fewer people and communication interfaces. And as a developer, you probably know that interfaces [between systems] are one of the biggest sources of issues.

For context, I am a Lead Developer (I became one when we introduced those roles explicitly). Our team size was 2 for a long time, then 4 for a while, and is now 3 developers plus 1 in semi-retirement working only half of the year.

[–] [email protected] 12 points 3 weeks ago (1 children)

I hadn’t even really heard of Codeberg at the time.

Codeberg didn't exist yet back then.

[–] [email protected] 21 points 3 weeks ago (1 children)

make bare got repositories

got it

[–] [email protected] 6 points 3 weeks ago

I did a bunch of other experiments, which didn't make things faster:

What didn't work is also particularly interesting.

[–] [email protected] 2 points 3 weeks ago

They have the blog post date in the title, but I don't see it on the page. Neither in the header nor at the bottom.

 

cross-posted from: https://programming.dev/post/11720354

UI Components: Smart Paste, Smart TextArea, Smart ComboBox

Dependency: Azure Cloud

They show an interesting new kind of interactivity. (Not that I, personally, would ever use Azure Cloud for that though.)

 

Backwards compatibility is a key principle in .NET, and this means that packages targeting previous .NET versions, like ‘net6.0’ or ‘net7.0’, are also compatible with ‘net8.0’. […]

The new “Include compatible frameworks” option we added allows you to flip between filtering by explicit asset frameworks and the larger set of ‘compatible’ frameworks. Filtering by packages’ compatible frameworks now reveals a much larger set of packages for you to choose from.

 

Truly astonishing how much generalized modding seems to be possible through general DirectX (8/9) interfaces and official Nvidia-provided tooling.

As an AMD graphics card user, it's very unfortunate that RTX/this functionality is proprietary and exclusive to Nvidia - the tooling, at least. The produced results supposedly work on other graphics cards too (I didn't find official/upstream docs about it).

For more technical details of how it works, see the GameWorks wiki:

 

cross-posted from: https://programming.dev/post/11034601

There's a lot - specifically a lot of machine learning talk and features - in the 1.5 release of Opus, the free and open audio codec.

Audible and continuous (albeit jittery) talk on 90% packet loss is crazy.

The WebRTC Integration / Samples section has an example where you can test out the 90% packet loss audio.
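To get an intuition for what "90% packet loss" means for a stream, here's a minimal sketch (not Opus itself - just a toy model, with made-up frame stand-ins) of dropping each transmitted frame independently with 90% probability. A real decoder would then have to conceal or reconstruct the missing frames:

```python
import random

def simulate_packet_loss(frames, loss_rate=0.9, seed=42):
    """Drop each frame independently with probability loss_rate.
    Lost frames become None; survivors pass through unchanged."""
    rng = random.Random(seed)
    return [f if rng.random() >= loss_rate else None for f in frames]

frames = list(range(100))           # stand-ins for 20 ms audio frames
received = simulate_packet_loss(frames)
survivors = [f for f in received if f is not None]
print(f"{len(survivors)} of {len(frames)} frames survived")
```

At a 90% loss rate, only around one frame in ten arrives, which is why continuous, intelligible speech under that condition is so impressive.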

 

Describes the trade-off between convenience and security of auto-confirmation while entering a numeric PIN - which leads to information disclosure considerations.

An attacker can use this behavior to discover the length of the PIN: Try to sign in once with some initial guess like “all ones” and see how many ones can be entered before the system starts validating the PIN.

Is this a problem?
