The China Mail - Firms and researchers at odds over superhuman AI

Firms and researchers at odds over superhuman AI
Firms and researchers at odds over superhuman AI / Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which executives have at times paired with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Prize winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines -- having first got rid of the human beings it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei are being "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

A.Sun--ThChM