sh.itjust.works

Why Pay A Pentester? (thehackernews.com)
submitted 9 hours ago by [email protected] to c/cybersecurity

What do you guys think? I don't think there's much depth to the arguments, myself; it reads more like a threadbare op-ed with a provocative title. But I'd like to hear your opinions on the impact of automated testing solutions on the role of pen-testers in the industry.

Something's off... (sh.itjust.works)
submitted 1 day ago by kersploosh to c/funny

cw: (spoiler) they say "retarded" at around 1:40 (the video is 15 years old)


Mistral Small 22B just dropped today and I am blown away by how good it is. I was already impressed with Mistral NeMo 12B's abilities, so I didn't know how much better a 22B could be. It passes really tough, obscure trivia that NeMo couldn't, and its reasoning abilities are even more refined.

With Mistral Small I have finally reached the plateau of what my hardware can handle for my personal use case. I need my AI to be able to generate at least around my base reading speed. The lowest I can tolerate is about 1.5 T/s; anything slower is unacceptable. I really doubted that a 22B could even run on my measly Nvidia GTX 1070 with 8 GB of VRAM and 16 GB of DDR4 RAM. NeMo ran at about 5.5 T/s on this system, so how would Small do?

Mistral Small Q4_K_M runs at 2.5 T/s with 28 layers offloaded to VRAM. As the context grows, that drops to 1.7 T/s. It is absolutely usable for real-time conversation. Sure, I would like the token speed to be faster, and I have considered going with the lowest recommended Q4 quant to help balance the speed a little. However, I am very happy just to have it running and actually usable in real time. It's crazy to me that such a seemingly advanced model fits on my modest hardware.

I'm a little sad now, though, since this is as far as I think I can go on the AI self-hosting frontier without investing in a beefier card. Do I need a bigger, smarter model than Mistral Small 22B? No. Hell, NeMo was serving me just fine. But now I want to know just how smart the biggest models get. I caught AI Acquisition Syndrome!
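For anyone wondering how "28 layers offloaded" squares with an 8 GB card, here is a rough back-of-the-envelope sketch. The 56-layer count and the ~4.5 bits/weight figure for a Q4_K_M quant are my own assumptions, and the even-split-per-layer model ignores the embedding/output tensors, so treat the result as illustrative, not a substitute for measuring with your actual runtime:

```python
def offload_plan(params_b, n_layers, bits_per_weight, vram_gb, reserve_gb=1.5):
    """Rough estimate of how many transformer layers fit in VRAM.

    Assumes weights are spread evenly across layers and reserves some
    VRAM for the KV cache, CUDA context, and display overhead.
    """
    # Total weight size in GB (params in billions * bits per weight / 8).
    total_gb = params_b * bits_per_weight / 8
    # Per-layer share under the even-split assumption.
    per_layer_gb = total_gb / n_layers
    # VRAM actually available for weights after the reserve.
    usable_gb = vram_gb - reserve_gb
    return min(n_layers, int(usable_gb / per_layer_gb)), per_layer_gb

# Mistral Small 22B at ~4.5 bits/weight, assumed 56 layers, 8 GB card:
layers, per_layer = offload_plan(22, 56, 4.5, 8.0)
# -> about 29 layers at roughly 0.22 GB each, in the same ballpark as
#    the 28 layers that worked for me in practice.
```

The reserve_gb knob is doing a lot of work here: with a long context the KV cache eats into that headroom, which is exactly why the usable offload count in practice can land a layer or two below the estimate.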

-Eurogamer 5 / 5

-PC Gamer 83 / 100

-TheGamer 4.5 / 5

-God is a Geek 9.5 / 10

-Metro GameCentral 9 / 10

-Digital Trends 4.5 / 5

-Slant Magazine 4 / 5

-Siliconera 9 / 10


-Eurogamer 4 / 5

-IGN 7 / 10

-TheGamer 4 / 5

-GamesRadar+ 4 / 5

-Hardcore Gamer 4 / 5

-God is a Geek 9 / 10

-DualShockers 8 / 10

-Hobby Consolas 87 / 100


-Game Rant 4.5 / 5

-PC Gamer 85 / 100

-IGN 8 / 10

-TheGamer 4.5 / 5

-GameSpot 8 / 10

-God is a Geek 9 / 10

-VideoGamer 8 / 10

-Digital Trends 4 / 5

submitted 18 hours ago* (last edited 11 hours ago) by threelonmusketeers to c/spacex

Starlink Group 9-17 launch out of SLC-4E in California currently scheduled for 2024-09-19 14:12 UTC, or 2024-09-19 07:12 local time (PDT). Booster [unknown] to land on Of Course I Still Love You.

Webcasts:
