Claude’s Condition Worsens, Reports Claude

Anthropic’s AI assistant, Claude, is facing significant challenges to its performance and reliability. The service recently experienced a major outage lasting nearly an hour, from 15:31 to 16:19 UTC, during which users saw elevated error rates, raising concerns about the platform’s overall stability.

Growing Quality Concerns for Claude

In recent months, users have reported a decline in the quality of responses from Claude. Feedback on social media and GitHub indicates increasing dissatisfaction with its performance. This trend comes as Anthropic has implemented measures to manage demand during peak usage periods.

Data Analysis and Complaints

To quantify these quality concerns, Claude was tasked with analyzing open issues related to its performance from the Claude Code GitHub repository. The findings suggest a notable increase in complaints. In the first 13 days of April 2026, more than 20 quality issues were reported, already surpassing the 18 logged across the whole of March.

  • April 2026: 20+ quality issues within 13 days
  • March 2026: 18 quality issues
  • January-February 2026: Baseline for reported issues
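A monthly tally like the one above can be produced by grouping issue records by creation date. The sketch below uses hypothetical sample records and an assumed set of quality-related labels; real data would come from the GitHub REST API (`GET /repos/{owner}/{repo}/issues`), and the numbers shown are illustrative, not actual Claude Code issue data.

```python
from collections import Counter
from datetime import datetime

# Hypothetical issue records mimicking the shape of GitHub's API payload.
# Issue numbers, dates, and labels here are illustrative placeholders.
issues = [
    {"number": 46099, "created_at": "2026-04-03T10:00:00Z", "labels": ["quality"]},
    {"number": 46212, "created_at": "2026-04-07T12:30:00Z", "labels": ["quality"]},
    {"number": 46949, "created_at": "2026-04-11T09:15:00Z", "labels": ["performance"]},
    {"number": 42796, "created_at": "2026-03-20T08:45:00Z", "labels": ["quality"]},
]

def monthly_quality_counts(issues, keywords=("quality", "performance")):
    """Count issues per YYYY-MM whose labels match a quality-related keyword."""
    counts = Counter()
    for issue in issues:
        if any(label in keywords for label in issue["labels"]):
            month = datetime.fromisoformat(
                issue["created_at"].replace("Z", "+00:00")
            ).strftime("%Y-%m")
            counts[month] += 1
    return counts

print(monthly_quality_counts(issues))  # Counter({'2026-04': 3, '2026-03': 1})
```

The keyword filter is a simplification: a real analysis would also need to match free-text complaints, since not every quality report carries a consistent label.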

Challenges in Reporting

Despite these reports, not every complaint is necessarily valid. Some issues appear to have been generated by AI, raising questions about the accuracy and reliability of the feedback. Furthermore, the GitHub Actions script employed by Anthropic may automatically close unresolved issues, obscuring problems that remain unaddressed.
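One way to estimate how many problems were auto-closed rather than resolved is to check who (or what) closed each issue. The sketch below uses hypothetical records; the bot account name and the "stale" label are assumptions about how a typical auto-close workflow marks issues, not confirmed details of Anthropic's actual setup.

```python
# Hypothetical closed-issue records mimicking GitHub's API payload shape.
# "github-actions[bot]" and the "stale" label are assumed markers of an
# automated close; a real workflow may use different names entirely.
closed_issues = [
    {"number": 101, "labels": ["bug"], "closed_by": "maintainer"},
    {"number": 102, "labels": ["bug", "stale"], "closed_by": "github-actions[bot]"},
    {"number": 103, "labels": ["quality", "stale"], "closed_by": "github-actions[bot]"},
]

def auto_closed(issues, bot="github-actions[bot]"):
    """Return issues that appear closed by automation rather than a human."""
    return [i for i in issues if i["closed_by"] == bot or "stale" in i["labels"]]

print([i["number"] for i in auto_closed(closed_issues)])  # [102, 103]
```

Counting such auto-closed issues separately would show whether the apparent resolution rate overstates how many complaints were actually addressed.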

Specific Issues Identified

Among the notable complaints cited by Claude are the following:

  • Quality degradation in complex engineering tasks (#42796)
  • Concerns about prediction-first behavior impacting risky projects (#46212)
  • Issues with compute throttling affecting paid users (#46949)
  • Severe quality decline in iterative coding tasks with Opus 4.6 (#46099)

While these complaints highlight growing concerns, some external assessments, such as those from Margin Lab, indicate that Claude Opus 4.6 has maintained a consistent performance score on the SWE-Bench-Pro test since February.

Conclusion

As Anthropic continues to address the challenges facing Claude, the company has yet to provide an official statement regarding the growing issues reported by users. The situation underscores the complexities associated with AI reliability and the need for careful monitoring and response to customer feedback.