submitted 1 week ago by [email protected] to c/[email protected]
[-] [email protected] 11 points 1 week ago

Provided you have humans overseeing the summaries

right, at which point you're better off just doing it the right way from the beginning, not to mention such a tiny detail as not shoving classified information into sam altman's black box

[-] conciselyverbose 9 points 1 week ago* (last edited 1 week ago)

I'm not really arguing the merit, just answering how I'm reading the article.

The systems are airgapped and never exfiltrate information so that shouldn't really be a concern.

Humans are also a potential liability to a classified operation. If you can get the same results with 2 human analysts overseeing/supplementing the work of AI as you would with 2 human analysts overseeing/supplementing 5 junior people, it's worth evaluating. You absolutely should never be blindly trusting an LLM for anything. They're not intelligent. But they can be used as a tool by capable people to increase their effectiveness.

[-] [email protected] 6 points 1 week ago

it's not airgapped, it's still cloud, it can't be. it's some kind of "secure" cloud that passed some kind of audit. openai has already had a breach or a few, so i'm not entirely sure it will pan out

[-] [email protected] 0 points 1 week ago

I see you've never taken part in a FedRAMP audit. They're brutal.

this post was submitted on 06 Jul 2024
40 points (100.0% liked)