
Edit: After reading the discussion, I figured I'd let GPT-4 speak for itself:

The quest to identify true artificial intelligence (AI) indeed presents challenges, especially as AI models become more sophisticated. Let's explore some indicators that researchers and practitioners consider when assessing AI systems:

  1. Understanding Context and Meaning:
    • True AI should demonstrate an understanding of context and meaning. It should not merely generate plausible-sounding sentences but also comprehend the content it produces.
    • A system that can engage in nuanced conversations, infer intent, and provide relevant responses based on context would be a strong indicator of advanced AI.
  2. Generalization and Adaptability:
    • An AI system that can generalize from its training data and adapt to novel situations is valuable.
    • True AI should not be limited to memorizing specific examples but should exhibit flexibility in handling diverse scenarios.
  3. Creativity and Novelty:
    • Creativity is a hallmark of human intelligence. An AI system that generates novel ideas, art, or solutions beyond its training data suggests progress.
    • For instance, AI-generated poetry, music, or visual art that resonates with human emotions could be an indicator.
  4. Ethical Decision-Making:
    • True AI should navigate ethical dilemmas. It should consider consequences, fairness, and societal impact.
    • An AI system that can weigh trade-offs and make ethical choices aligned with human values is a positive sign.
  5. Self-Awareness and Self-Improvement:
    • While self-awareness akin to human consciousness remains elusive, an AI system that can self-assess its performance and seek improvement is promising.
    • Continuous learning, adaptation, and self-correction are essential traits.
  6. Transparency and Explainability:
    • True AI should be transparent about its decision-making process. It should provide explanations for its outputs.
    • Efforts to develop interpretable AI models and techniques for explaining their predictions contribute to trustworthiness.
  7. Robustness and Safety:
    • AI systems must handle adversarial inputs, noisy data, and edge cases gracefully.
    • Robustness against attacks and safety precautions to prevent harmful behavior are critical indicators.
  8. Human-AI Collaboration:
    • Rather than replacing humans, true AI should complement human abilities.
    • Systems that collaborate effectively with humans, such as AI assistants, co-creative tools, or medical diagnostics, demonstrate progress.
  9. Quantifiable Metrics:
    • Researchers propose various metrics to assess AI capabilities, such as perplexity, BLEU score, F1 score, and accuracy (see the first sketch after this list).
    • However, no single metric captures the full spectrum of intelligence.
  10. Detection Tools:
    • Researchers are developing tools to differentiate between human- and AI-generated text.
    • These tools analyze features like fluency, word frequency, punctuation patterns, and sentence length¹ (see the second sketch after this list).
    • While imperfect, they contribute to identifying AI-generated content.
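
To make item 9 concrete, here's a minimal sketch of how three of those metrics are computed, in plain Python. The token log-probabilities and labels are made-up illustrative values, not output from any real model; BLEU is omitted because it requires n-gram overlap against reference texts and is usually taken from a library:

```python
import math

# Perplexity: exp of the average negative log-probability a language
# model assigns to each token. Lower is better. Values here are made up.
token_logprobs = [-0.9, -2.1, -0.3, -1.4, -0.7]
perplexity = math.exp(-sum(token_logprobs) / len(token_logprobs))

# Accuracy and F1 for a toy binary classification task (1 = positive).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(f"perplexity={perplexity:.2f} accuracy={accuracy:.2f} f1={f1:.2f}")
```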
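
And for item 10, here's a naive sketch of the kind of surface features such detectors inspect (sentence length, punctuation rate, word-frequency spread). It only extracts features; it is not an actual detector, and the function name and feature choices are my own illustration:

```python
import re
from collections import Counter
from statistics import mean, pstdev

def surface_features(text: str) -> dict:
    """Shallow stylometric features of the kind detection tools look at."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    sentence_lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
    return {
        "avg_sentence_len": mean(sentence_lengths),
        # Human text tends to vary sentence length more ("burstiness").
        "sentence_len_stdev": pstdev(sentence_lengths),
        "punct_per_word": sum(text.count(c) for c in ",;:()") / len(words),
        # Share of vocabulary used exactly once: a crude lexical-variety proxy.
        "hapax_ratio": sum(1 for n in counts.values() if n == 1) / len(counts),
    }

sample = ("The quest to identify true AI presents challenges. "
          "Some sentences run long, winding through clauses; others are short. "
          "Very short.")
print(surface_features(sample))
```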

In summary, the arrival of true AI may not be a singular event but a gradual evolution marked by advancements in the above dimensions. As AI continues to progress, our understanding of its capabilities will evolve, and new indicators may emerge.

Source: Conversation with Bing, 2/29/2024.
(1) How to spot AI-generated text | MIT Technology Review. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/
(2) Intelligent Supertrend (AI) - Buy or Sell Signal — Indicator by .... https://www.tradingview.com/script/q9244PAH-Intelligent-Supertrend-AI-Buy-or-Sell-Signal/
(3) Indicators - True ALGO. https://truealgo.com/indicators/
(4) Improve Key Performance Indicators With AI | MIT Sloan Management Review. https://sloanreview.mit.edu/article/improve-key-performance-indicators-with-ai/
(5) New AI classifier for indicating AI-written text | OpenAI. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

15 comments
[–] [email protected] 2 points 6 months ago

Probably understanding current topical humour, sarcasm, and hyperbole.

These are some general areas where the machine peeks through and breaks the illusion.

[–] [email protected] 1 points 6 months ago (4 children)

People are in denial about AI because it is scary and people have no mental discipline.

AI is here. Anyone who disagrees, please present me with a text processing task that a “real AI” could do but an LLM cannot.

The Turing test is the best we’ve got, and when a machine passes the Turing test there is no reason whatsoever to consider it not to be intelligent.

I’m serious about this. It’s not logic that people are using. It’s motivated reasoning. People are afraid of AI (with good reason). It is that fear which makes them conclude AI is still far away, not any kind of rational evaluation.

The Turing test was perfectly valid until machines started passing it, at which point people immediately discredited the test.

They’re just doing what people in horror movies are doing when they say “No! It can’t be”. The mind attempts to reject what it cannot handle.

[–] [email protected] 2 points 6 months ago

A "real AI" should be able to do self improvement, and LLM's can't do that. Yes, they could make their own code neater, or take up less space, or add features, but they can't do any of that without being instructed. A "real AI" could write a story on its own, but LLMs can't, they can only do what they are asked. Yes, you could write the code to output text at random, but then the human is still the impetus for the action.

"Real AI" should be capable of independent thought, action, and desires.

[–] [email protected] 1 points 6 months ago (1 children)

Anyone who disagrees, please present me with a text processing task that a “real AI” could do but an LLM cannot.

Describe this photo without nonsense mixed in. [photo: a black puppy in the driver's seat]

[ChatGPT's description, with highlights on what's wrong]

[Gemini's description, with highlights on what's wrong]

[–] [email protected] 1 points 6 months ago

I know this is not purely text processing, but my argument is that there's no "true" understanding in these tools. They're made to look like they have it, and they're useful for sure, but it's not real intelligence.

[–] [email protected] 1 points 6 months ago

The tree-ring test?

[–] [email protected] -2 points 6 months ago

Ability to act on free will.

You ask ChatGPT a question and it is going to answer, because that's what it has been programmed to do. Input question, output answer.

Now if ChatGPT could be like "Nah, I'm not going to answer that because I don't feel like it"...

Yes, "AI" can be programmed to not answer certain things, e.g. porn stuff. But it does not make the conscious choice to do so; it is following its programming.
