Text 3
This year marks exactly two centuries since the publication of Frankenstein; or, The Modern
Prometheus, by Mary Shelley. Even before the invention of the electric light bulb, the author
produced a remarkable work of speculative fiction that would foreshadow many ethical questions
to be raised by technologies yet to come.
Today the rapid growth of artificial intelligence (AI) raises fundamental questions: “What is
intelligence, identity, or consciousness? What makes humans humans?”
What is being called artificial general intelligence, machines that would imitate the way
humans think, continues to evade scientists. Yet humans remain fascinated by the idea of robots
that would look, move, and respond like humans, similar to those recently depicted on popular
sci-fi TV series such as “Westworld” and “Humans”.
Just how people think is still far too complex to be understood, let alone reproduced, says
David Eagleman, a Stanford University neuroscientist. “We are just in a situation where there are
no good theories explaining what consciousness actually is and how you could ever build a
machine to get there.”
But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of
autonomous vehicles, for example, poses thorny ethical questions. Human drivers sometimes must
make split-second decisions. Their reactions may be a complex combination of instant reflexes,
input from past driving experiences, and what their eyes and ears tell them in that moment. AI
“vision” today is not nearly as sophisticated as that of humans. And to anticipate every imaginable
driving situation is a difficult programming problem.
Whenever decisions are based on masses of data, “you quickly get into a lot of ethical
questions,” notes Tan Kiat How, chief executive of a Singapore-based agency that is helping the
government develop a voluntary code for the ethical use of AI. Along with Singapore, other
governments and mega-corporations are beginning to establish their own guidelines. Britain is
setting up a data ethics center. India released its AI ethics strategy this spring.
On June 7 Google pledged not to “design or deploy AI” that would cause “overall harm,” or
to develop AI-directed weapons or use AI for surveillance that would violate international norms.
It also pledged not to deploy AI whose use would violate international laws or human rights.
While the statement is vague, it represents one starting point. So does the idea that decisions
made by AI systems should be explainable, transparent, and fair.
To put it another way: How can we make sure that the thinking of intelligent machines
reflects humanity’s highest values? Only then will they be useful servants and not Frankenstein’s
out-of-control monster.
31. Mary Shelley’s novel Frankenstein is mentioned because it ________.
[A] fascinates AI scientists all over the world
[B] has remained popular for as long as 200 years
[C] involves some concerns raised by AI today
[D] has sparked serious ethical controversies
32. In David Eagleman’s opinion, our current knowledge of consciousness ________.
[A] helps explain artificial intelligence
[B] can be misleading to robot making
[C] inspires popular sci-fi TV series
[D] is too limited for us to reproduce it
33. The solution to the ethical issues brought by autonomous vehicles ________.
[A] can hardly ever be found
[B] is still beyond our capacity
[C] causes little public concern
[D] has aroused much curiosity
34. The author’s attitude toward Google’s pledges is one of ________.
[A] affirmation
[B] skepticism
[C] contempt
[D] respect
35. Which of the following would be the best title for the text?
[A] AI’s Future: In the Hands of Tech Giants.
[B] Frankenstein, the Novel Predicting the Age of AI.
[C] The Conscience of AI: Complex But Inevitable.
[D] AI Shall Be Killers Once Out of Control.
31. 【Answer】 C
【Explanation】 The key phrase in the question stem, “Mary Shelley’s novel Frankenstein,” points back to the first paragraph. Its last sentence says the novel “would foreshadow many ethical questions to be raised by technologies yet to come,” which shows the novel is mentioned because it involves the ethical concerns raised by technologies to come. This matches [C], so [C] is correct.
32. 【Answer】 D
【Explanation】 The name “David Eagleman” in the stem points to the fourth paragraph. Its first sentence states that “how people think is still far too complex to be understood, let alone reproduced,” and its second sentence adds, “We are just in a situation where there are no good theories explaining what consciousness actually is.” Taken together, the two sentences show that our current knowledge of consciousness is too limited for us to reproduce it, which matches [D]. So [D] is correct.
33. 【Answer】 B
【Explanation】 The key phrase “autonomous vehicles” in the stem points to the fifth paragraph. Its first two sentences, “But that doesn’t mean crucial ethical issues involving AI aren’t at hand. The coming use of autonomous vehicles, for example, poses thorny ethical questions,” show that ethical issues involving AI are close at hand, with autonomous vehicles then cited as an example. In other words, the ethical problems raised by autonomous vehicles remain unsolved. Read together with the stem, [B] means that a solution to the ethical issues brought by autonomous vehicles is still beyond our capacity, which is consistent with the passage, so [B] is correct. [A] is too absolute, and the sixth paragraph shows that various governments and companies are looking for solutions; the “public concern” of [C] and the “curiosity” of [D] are not mentioned in the text. So the answer is [B].
34. 【Answer】 A
【Explanation】 The key phrases “the author’s attitude” and “Google’s pledges” in the stem point to the seventh and eighth paragraphs. The seventh paragraph centers on Google’s pledges, and the eighth opens with “While the statement is vague, it represents one starting point”: although Google’s statement is vague, it still represents a starting point, with “while” marking the concession. This shows the author views Google’s pledges with approval, which matches [A]. So [A] is correct.
35. 【Answer】 C
【Explanation】 The phrase “best title” in the stem shows that this item tests the main idea of the passage. The text first uses Mary Shelley’s novel to introduce its topic, the moral questions raised by new technology. It then discusses the ethical issues raised by AI in detail, pointing out that we cannot determine how far machine intelligence can go because human consciousness cannot yet be explained, and then makes clear that the moral questions involving AI are already at hand and unavoidable. Weighing all the options, [C] best summarizes the whole text. So the answer is [C].