
希平方 x ICRT

Iyad Rahwan: "What Moral Decisions Should Driverless Cars Make?"


Today I'm going to talk about technology and society. The Department of Transport estimated that last year 35,000 people died from traffic crashes in the US alone. Worldwide, 1.2 million people die every year in traffic accidents. If there was a way we could eliminate 90 percent of those accidents, would you support it? Of course you would. This is what driverless car technology promises to achieve by eliminating the main source of accidents—human error.

Now picture yourself in a driverless car in the year 2030, sitting back and watching this vintage TEDxCambridge video.

All of a sudden, the car experiences mechanical failure and is unable to stop. If the car continues, it will crash into a bunch of pedestrians crossing the street, but the car may swerve, hitting one bystander, killing them to save the pedestrians. What should the car do, and who should decide? What if instead the car could swerve into a wall, crashing and killing you, the passenger, in order to save those pedestrians? This scenario is inspired by the trolley problem, which was invented by philosophers a few decades ago to think about ethics.

Now, the way we think about this problem matters. We may, for example, not think about it at all. We may say this scenario is unrealistic, incredibly unlikely, or just silly. But I think this criticism misses the point because it takes the scenario too literally. Of course no accident is going to look like this; no accident has two or three options where everybody dies somehow. Instead, the car is going to calculate something like the probability of hitting a certain group of people. If you swerve in one direction versus another direction, you might slightly increase the risk to passengers or other drivers versus pedestrians. It's going to be a more complex calculation, but it's still going to involve trade-offs, and trade-offs often require ethics.
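To make that kind of calculation concrete, here is a minimal sketch in Python of an expected-harm comparison between candidate maneuvers. The collision probabilities, maneuver names, and harm weights are invented for illustration; nothing here comes from the talk. The weights are where the ethical trade-off lives.

```python
# Minimal sketch of an expected-harm comparison between maneuvers.
# All numbers and names below are hypothetical, chosen only to illustrate
# the kind of probabilistic trade-off described in the talk.

# Estimated probability of a collision with each group, per maneuver.
MANEUVERS = {
    "stay_in_lane": {"pedestrians": 0.30, "passengers": 0.02, "other_drivers": 0.01},
    "swerve_left":  {"pedestrians": 0.05, "passengers": 0.10, "other_drivers": 0.08},
    "swerve_right": {"pedestrians": 0.08, "passengers": 0.06, "other_drivers": 0.12},
}

# Relative weight placed on harm to each group -- equal weights encode one
# ethical choice; weighting passengers more heavily encodes another.
HARM_WEIGHTS = {"pedestrians": 1.0, "passengers": 1.0, "other_drivers": 1.0}


def expected_harm(collision_probs: dict) -> float:
    """Weighted sum of collision probabilities for one maneuver."""
    return sum(HARM_WEIGHTS[group] * p for group, p in collision_probs.items())


if __name__ == "__main__":
    for name, probs in MANEUVERS.items():
        print(f"{name:>13}: expected harm = {expected_harm(probs):.3f}")
    best = min(MANEUVERS, key=lambda m: expected_harm(MANEUVERS[m]))
    print("lowest expected harm:", best)
```

With equal weights this picks the utilitarian answer; shifting weight toward the passengers reproduces the "protect me at all costs" preference the talk returns to later.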

We might say then, "Well, let's not worry about this. Let's wait until technology is fully ready and 100 percent safe." Suppose that we can indeed eliminate 90 percent of those accidents, or even 99 percent in the next 10 years. What if eliminating the last one percent of accidents requires 50 more years of research? Should we not adopt the technology? That's 60 million people dead in car accidents if we maintain the current rate. So the point is, waiting for full safety is also a choice, and it also involves trade-offs.

Now, people online on social media have been coming up with all sorts of ways to not think about this problem. One person suggested the car should just swerve somehow in between the passengers—and the bystander. Of course if that's what the car can do, that's what the car should do. We're interested in scenarios in which this is not possible. And my personal favorite was a suggestion by a blogger to have an eject button in the car that you press—just before the car self-destructs.

So if we acknowledge that cars will have to make trade-offs on the road, how do we think about those trade-offs, and how do we decide? Well, maybe we should run a survey to find out what society wants, because ultimately, regulations and the law are a reflection of societal values.

So this is what we did. With my collaborators, Jean-François Bonnefon and Azim Shariff, we ran a survey in which we presented people with these types of scenarios. We gave them two options inspired by two philosophers: Jeremy Bentham and Immanuel Kant. Bentham says the car should follow utilitarian ethics: it should take the action that will minimize total harm—even if that action will kill a bystander and even if that action will kill the passenger. Immanuel Kant says the car should follow duty-bound principles, like "Thou shalt not kill." So you should not take an action that explicitly harms a human being, and you should let the car take its course even if that's going to harm more people.

What do you think? Bentham or Kant? Here's what we found. Most people sided with Bentham. So it seems that people want cars to be utilitarian, minimize total harm, and that's what we should all do. Problem solved. But there is a little catch. When we asked people whether they would purchase such cars, they said, "Absolutely not." They would like to buy cars that protect them at all costs, but they want everybody else to buy cars that minimize harm.

We've seen this problem before. It's called a social dilemma. And to understand the social dilemma, we have to go a little bit back in history. In the 1800s, English economist William Forster Lloyd published a pamphlet which describes the following scenario. You have a group of farmers—English farmers—who are sharing a common land for their sheep to graze. Now, if each farmer brings a certain number of sheep—let's say three sheep—the land will be rejuvenated, the farmers are happy, the sheep are happy, everything is good. Now, if one farmer brings one extra sheep, that farmer will do slightly better, and no one else will be harmed. But if every farmer made that individually rational decision, the land will be overrun, and it will be depleted to the detriment of all the farmers, and of course, to the detriment of the sheep.
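The incentive structure Lloyd described is easy to see in a toy payoff model. The sketch below uses invented numbers (not from the talk or the pamphlet) purely to show why the individually rational move, taken by everyone, leaves everyone worse off.

```python
# Toy payoff model of the common-pasture scenario. Illustrative numbers only:
# each sheep is worth less once the shared land is overgrazed, so one extra
# sheep helps the farmer who adds it, but if every farmer adds one, all lose.

NUM_FARMERS = 5
CAPACITY = 16        # total sheep the land supports without degrading
BASE_VALUE = 10.0    # value of one sheep on healthy land
DECLINE = 1.5        # value lost per sheep beyond capacity


def value_per_sheep(total_sheep: int) -> float:
    """Each sheep is worth less as the commons gets overgrazed."""
    overuse = max(0, total_sheep - CAPACITY)
    return max(0.0, BASE_VALUE - DECLINE * overuse)


def payoff(own_sheep: int, total_sheep: int) -> float:
    return own_sheep * value_per_sheep(total_sheep)


if __name__ == "__main__":
    k = 3  # every farmer keeps three sheep: 15 in total, the land is fine
    print("all cooperate :", payoff(k, NUM_FARMERS * k))            # 30.0 each
    print("lone defector :", payoff(k + 1, NUM_FARMERS * k + 1))    # 40.0 for the defector
    print("all defect    :", payoff(k + 1, NUM_FARMERS * (k + 1)))  # 16.0 each
```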

We see this problem in many places: in the difficulty of managing overfishing, or in reducing carbon emissions to mitigate climate change. When it comes to the regulation of driverless cars, the common land now is basically public safety—that's the common good—and the farmers are the passengers or the car owners who are choosing to ride in those cars. And by making the individually rational choice of prioritizing their own safety, they may collectively be diminishing the common good, which is minimizing total harm. It's called the tragedy of the commons, traditionally, but I think in the case of driverless cars, the problem may be a little bit more insidious because there is not necessarily an individual human being making those decisions. So car manufacturers may simply program cars that will maximize safety for their clients, and those cars may learn automatically on their own that doing so requires slightly increasing risk for pedestrians. So to use the sheep metaphor, it's like we now have electric sheep that have a mind of their own. And they may go and graze even if the farmer doesn't know it.

So this is what we may call the tragedy of the algorithmic commons, and it offers new types of challenges. Typically, traditionally, we solve these types of social dilemmas using regulation, so either governments or communities get together, and they decide collectively what kind of outcome they want and what sort of constraints on individual behavior they need to implement. And then using monitoring and enforcement, they can make sure that the public good is preserved. So why don't we just, as regulators, require that all cars minimize harm? After all, this is what people say they want. And more importantly, I can be sure that as an individual, if I buy a car that may sacrifice me in a very rare case, I'm not the only sucker doing that while everybody else enjoys unconditional protection.

In our survey, we did ask people whether they would support regulation and here's what we found. First of all, people said no to regulation; and second, they said, "Well if you regulate cars to do this and to minimize total harm, I will not buy those cars." So ironically, by regulating cars to minimize harm, we may actually end up with more harm because people may not opt into the safer technology even if it's much safer than human drivers.

I don't have the final answer to this riddle, but I think as a starting point, we need society to come together to decide what trade-offs we are comfortable with and to come up with ways in which we can enforce those trade-offs.

As a starting point, my brilliant students, Edmond Awad and Sohan Dsouza, built the Moral Machine website, which generates random scenarios at you—basically a bunch of random dilemmas in a sequence where you have to choose what the car should do in a given scenario. And we vary the ages and even the species of the different victims. So far we've collected over five million decisions by over one million people worldwide from the website. And this is helping us form an early picture of what trade-offs people are comfortable with and what matters to them—even across cultures. But more importantly, doing this exercise is helping people recognize the difficulty of making those choices and that the regulators are tasked with impossible choices. And maybe this will help us as a society understand the kinds of trade-offs that will be implemented ultimately in regulation.

And indeed, I was very happy to hear that the first set of regulations that came from the Department of Transport—announced last week—included a 15-point checklist for all carmakers to provide, and number 14 was ethical consideration—how are you going to deal with that. We also have people reflect on their own decisions by giving them summaries of what they chose. I'll give you one example—I'm just going to warn you that this is not your typical example, your typical user. This is the most sacrificed and the most saved character for this person.

Some of you may agree with him, or her, we don't know. But this person also seems to slightly prefer passengers over pedestrians in their choices and is very happy to punish jaywalking.

So let's wrap up. We started with the question—let's call it the ethical dilemma—of what the car should do in a specific scenario: swerve or stay? But then we realized that the problem was a different one. It was the problem of how to get society to agree on and enforce the trade-offs they're comfortable with. It's a social dilemma.

In the 1940s, Isaac Asimov wrote his famous laws of robotics—the three laws of robotics. A robot may not harm a human being, a robot may not disobey a human being, and a robot may not allow itself to come to harm—in this order of importance. But after 40 years or so and after so many stories pushing these laws to the limit, Asimov introduced the zeroth law which takes precedence above all, and it's that a robot may not harm humanity as a whole. Now, I don't know what this means in the context of driverless cars or any specific situation, and I don't know how we can implement it, but I think that by recognizing that the regulation of driverless cars is not only a technological problem but also a societal cooperation problem, I hope that we can at least begin to ask the right questions.

Thank you.
