OpenAI's lobbying efforts in the European Union center on modifying proposed AI regulations that could impact its operations. The company is notably pushing to weaken provisions of the draft AI Act that would classify certain AI systems, such as OpenAI's GPT-3, as "high risk."

Altman's Stance on AI Regulation:

OpenAI CEO Sam Altman has been very vocal about the need for AI regulation. However, he is advocating for a specific kind of regulation: one that favors OpenAI and its operations.

OpenAI's White Paper:

OpenAI's lobbying efforts in the EU are detailed in a document titled "OpenAI's White Paper on the European Union's Artificial Intelligence Act." The paper seeks to amend provisions of the proposed AI Act that would classify certain AI systems as "high risk."

"High Risk" AI Systems:

The European Commission's "high risk" classification covers systems that could harm health, safety, fundamental rights, or the environment. The Act would impose legal requirements on such systems, including human oversight and transparency. OpenAI, however, argues that its systems, such as GPT-3, are not inherently "high risk," even though they could be used in high-risk ways. It argues that regulation should target the companies that deploy AI models, not those that provide them.

Alignment with Other Tech Giants:

OpenAI's position mirrors that of other tech giants like Microsoft and Google, which have also lobbied to weaken the EU's AI Act.

Outcome of Lobbying Efforts:

The lobbying efforts appear to have been successful: the sections OpenAI opposed were removed from the final draft of the AI Act. This may explain why Altman walked back an earlier threat to pull OpenAI out of the EU over the Act.

Source (Mashable)

PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 50+ media outlets (TheVerge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!

 

A Wharton professor argues that businesses should encourage their employees to share their individual AI-enhanced productivity hacks, even though many workers currently hide these tactics because of corporate restrictions.

Workers' Use of AI and Secrecy:

  • Employees are increasingly using AI tools, such as OpenAI's ChatGPT, to boost their personal productivity and manage multiple jobs.
  • However, due to strict corporate rules against AI use, these employees often keep their AI usage secret.

Issues with Corporate Restrictions:

  • Companies tend to ban AI tools because of privacy and legal worries.
  • These restrictions result in workers being reluctant to share their AI-driven productivity improvements, fearing potential penalties.
  • Despite the bans, employees often find ways to circumvent these rules, like using their personal devices to access AI tools.

Proposed Incentives for Disclosure:

  • The Wharton professor suggests that companies should incentivize employees to disclose their uses of AI.
  • Proposed incentives could include shorter workdays, making the trade-off beneficial for both employees and the organization.

Anticipated Impact of AI:

  • Generative AI is projected to significantly transform the labor market, particularly affecting white-collar and college-educated workers.
  • According to a Goldman Sachs analysis, the technology could affect 300 million full-time jobs and significantly boost global labor productivity.

Source (Business Insider)

PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 50+ media outlets (TheVerge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!

 

The tech industry is experiencing significant job cuts, driving demand for HR professionals who can handle termination processes tactfully. ChatGPT is increasingly being used to help these professionals with their difficult tasks.

Layoffs in the Tech Industry: Major tech corporations have recently cut jobs, increasing the need for HR professionals who can handle sensitive termination processes with tact.

  • Tech giants like Google, Meta, and Microsoft have laid off tens of thousands of workers over the past six months.
  • The layoffs have sparked demand for Human Resources professionals, particularly those skilled in handling termination processes.

HR Professionals and AI Tools: To better manage these difficult termination conversations, HR professionals are leveraging AI tools.

  • Many HR professionals in the tech industry are turning to AI to assist them with challenging tasks.
  • Over 50% of HR professionals in the tech industry have used AI like ChatGPT for training, surveys, performance reviews, recruiting, employee relations, etc.
  • More than 10% of these HR professionals have used ChatGPT to craft employee terminations.

Survey Findings and AI Usage: A recent survey of tech HR professionals and tech employees examined their experiences with HR in the industry, revealing extensive AI use.

  • The survey involved 213 tech HR professionals and 792 tech employees.
  • The findings suggest an increasing reliance on AI tools, especially ChatGPT, for diverse HR tasks, including crafting terminations.

Implications of AI Use: Despite its convenience, using AI in sensitive situations like employee termination can create trust issues.

  • AI chatbots, like ChatGPT, allow users to emotionally detach from difficult situations such as job termination.
  • However, using AI for these purposes could result in decreased trust between employees and HR professionals.

Previous Use of ChatGPT: ChatGPT has been used for a variety of sensitive matters in the past, such as writing wedding vows and eulogies.

  • ChatGPT's use is not limited to HR-related tasks; it has previously been used to write wedding vows and eulogies.
  • This illustrates the versatility of AI tools in dealing with emotionally charged situations.

Source (ZDNet)

PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 40+ media outlets (TheVerge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!