希平方 x ICRT

Damon Horowitz: "We Need a Moral Operating System"

Power. That is the word that comes to mind. We're the new technologists. We have a lot of data, so we have a lot of power. How much power do we have? Scene from a movie: "Apocalypse Now"—great movie. We've got to get our hero, Captain Willard, to the mouth of the Nung River so he can go pursue Colonel Kurtz. The way we're going to do this is fly him in and drop him off. So the scene: the sky is filled with this fleet of helicopters carrying him in. And there's this loud, thrilling music in the background, this wild music.

That's a lot of power. That's the kind of power I feel in this room. That's the kind of power we have because of all of the data that we have.

Let's take an example. What can we do with just one person's data? What can we do with that guy's data? I can look at your financial records. I can tell if you pay your bills on time. I know if you're good to give a loan to. I can look at your medical records; I can see if your pump is still pumping—see if you're good to offer insurance to. I can look at your clicking patterns. When you come to my website, I actually know what you're going to do already because I've seen you visit millions of websites before. And I'm sorry to tell you, you're like a poker player, you have a tell. I can tell with data analysis what you're going to do before you even do it. I know what you like. I know who you are, and that's even before I look at your mail or your phone.

Those are the kinds of things we can do with the data that we have. But I'm not actually here to talk about what we can do. I'm here to talk about what we should do. What's the right thing to do?

Now I see some puzzled looks like, "Why are you asking us what's the right thing to do? We're just building this stuff. Somebody else is using it." Fair enough. But it brings me back. I think about World War II—some of our great technologists then, some of our great physicists, studying nuclear fission and fusion—just nuclear stuff. We gather together these physicists in Los Alamos to see what they'll build. We want the people building the technology thinking about what we should be doing with the technology.

So what should we be doing with that guy's data? Should we be collecting it, gathering it, so we can make his online experience better? So we can make money? So we can protect ourselves if he was up to no good? Or should we respect his privacy, protect his dignity and leave him alone? Which one is it? How should we figure it out?

I know: crowdsource. Let's crowdsource this. So to get people warmed up, let's start with an easy question—something I'm sure everybody here has an opinion about: iPhone versus Android. Let's do a show of hands—iPhone. Uh huh. Android. You'd think with a bunch of smart people we wouldn't be such suckers just for the pretty phones.

Next question, a little bit harder. Should we be collecting all of that guy's data to make his experiences better and to protect ourselves in case he's up to no good? Or should we leave him alone? Collect his data. Leave him alone. You're safe. It's fine.

Okay, last question—harder question—when trying to evaluate what we should do in this case, should we use a Kantian deontological moral framework, or should we use a Millian consequentialist one? Kant. Mill. Not as many votes. Yeah, that's a terrifying result. It's terrifying, because we have stronger opinions about our hand-held devices than about the moral framework we should use to guide our decisions.

How do we know what to do with all the power we have if we don't have a moral framework? We know more about mobile operating systems, but what we really need is a moral operating system. What's a moral operating system? We all know right and wrong, right? You feel good when you do something right, you feel bad when you do something wrong. Our parents teach us that: praise for the good, scold for the bad. But how do we figure out what's right and wrong? And from day to day, we have the techniques that we use. Maybe we just follow our gut. Maybe we take a vote—we crowdsource. Or maybe we punt—ask the legal department, see what they say. In other words, it's kind of random, kind of ad hoc, how we figure out what we should do. And maybe, if we want to be on surer footing, what we really want is a moral framework that will help guide us there, that will tell us what kinds of things are right and wrong in the first place, and how we would know in a given situation what to do.

So let's get a moral framework. We're numbers people, living by numbers. How can we use numbers as the basis for a moral framework? I know a guy who did exactly that. A brilliant guy—he's been dead 2,500 years. Plato, that's right. Remember him—old philosopher? You were sleeping during that class. And Plato, he had a lot of the same concerns that we did. He was worried about right and wrong. He wanted to know what is just, but he was worried that all we seem to be doing is trading opinions about this. He says something's just. She says something else is just. It's kind of convincing when he talks and when she talks too. I'm just going back and forth; I'm not getting anywhere. I don't want opinions; I want knowledge. I want to know the truth about justice—like we have truths in math. In math, we know the objective facts. Take a number, any number—two. Favorite number. I love that number. There are truths about two. If you've got two of something, you add two more, you get four. That's true no matter what thing you're talking about. It's an objective truth about the form of two, the abstract form. When you have two of anything—two eyes, two ears, two noses, just two protrusions—those all partake of the form of two. They all participate in the truths that two has. They all have two-ness in them. And therefore, it's not a matter of opinion.

What if, Plato thought, ethics was like math? What if there were a pure form of justice? What if there are truths about justice, and you could just look around in this world and see which things participated, partook of that form of justice? Then you would know what was really just and what wasn't. It wouldn't be a matter of just opinion or just appearances. That's a stunning vision. I mean, think about that. How grand. How ambitious. That's as ambitious as we are. He wants to solve ethics. He wants objective truths. If you think that way, you have a Platonist moral framework.

If you don't think that way, well, you have a lot of company in the history of Western philosophy, because the tidy idea, you know, people criticized it. Aristotle, in particular, he was not amused. He thought it was impractical. Aristotle said, "We should seek only so much precision in each subject as that subject allows." Aristotle thought ethics wasn't a lot like math. He thought ethics was a matter of making decisions in the here-and-now using our best judgment to find the right path. If you think that, Plato's not your guy. But don't give up. Maybe there's another way that we can use numbers as the basis of our moral framework.

How about this: What if in any situation you could just calculate, look at the choices, measure out which one's better and know what to do? That sound familiar? That's a utilitarian moral framework. John Stuart Mill was a great advocate of this—nice guy besides—and he's only been dead 200 years. So the basis of utilitarianism—I'm sure you're familiar with it at least. The three people who voted for Mill before are familiar with this. But here's the way it works. What if morals, what if what makes something moral is just a matter of whether it maximizes pleasure and minimizes pain? It's something intrinsic to the act. It's not about its relation to some abstract form. It's just a matter of the consequences. You just look at the consequences and see if, overall, it's for the good or for the worse. That would be simple. Then we know what to do.

Let's take an example. Suppose I go up and I say, "I'm going to take your phone." Not just because it rang earlier, but I'm going to take it because I made a little calculation. I thought, that guy looks suspicious. And what if he's been sending little messages to Bin Laden's hideout—or whoever took over after Bin Laden—and he's actually like a terrorist, a sleeper cell. I'm going to find that out, and when I find that out, I'm going to prevent a huge amount of damage that he could cause. That has a very high utility to prevent that damage. And compared to the little pain that it's going to cause (because it's going to be embarrassing when I'm looking at his phone and seeing that he has a Farmville problem and that whole bit), that's overwhelmed by the value of looking at the phone. If you feel that way, that's a utilitarian choice.

But maybe you don't feel that way either. Maybe you think, it's his phone. It's wrong to take his phone because he's a person and he has rights and he has dignity, and we can't just interfere with that. He has autonomy. It doesn't matter what the calculations are. There are things that are intrinsically wrong—like lying is wrong, like torturing innocent children is wrong. Kant was very good on this point, and he said it a little better than I'll say it. He said, "We should use our reason to figure out the rules by which we should guide our conduct, and then it is our duty to follow those rules. It's not a matter of calculation."

So let's stop. We're right in the thick of it, this philosophical thicket. And this goes on for thousands of years, because these are hard questions, and I've only got 15 minutes. So let's cut to the chase. How should we be making our decisions? Is it Plato, is it Aristotle, is it Kant, is it Mill? What should we be doing? What's the answer? What's the formula that we can use in any situation to determine what we should do, whether we should use that guy's data or not? What's the formula? There's not a formula. There's not a simple answer.

Ethics is hard. Ethics requires thinking. And that's uncomfortable. I know; I spent a lot of my career in artificial intelligence, trying to build machines that could do some of this thinking for us, that could give us answers. But they can't. You can't just take human thinking and put it into a machine. We're the ones who have to do it. Happily, we're not machines, and we can do it. Not only can we think, we must. Hannah Arendt said, "The sad truth is that most evil done in this world is not done by people who choose to be evil. It arises from not thinking." That's what she called the "banality of evil." And the response to that is that we demand the exercise of thinking from every sane person.

So let's do that. Let's think. In fact, let's start right now. Every person in this room do this: think of the last time you had a decision to make where you were worried about doing the right thing, where you wondered, "What should I be doing?" Bring that to mind, and now reflect on that and say, "How did I come to that decision? What did I do? Did I follow my gut? Did I have somebody vote on it? Or did I punt to legal?" Or now we have a few more choices. "Did I evaluate what would be the highest pleasure like Mill would? Or like Kant, did I use reason to figure out what was intrinsically right?" Think about it. Really bring it to mind. This is important. It is so important we are going to spend 30 seconds of valuable TED Talk time doing nothing but thinking about this. Are you ready? Go.

Stop. Good work. What you just did, that's the first step towards taking responsibility for what we should do with all of our power.

Now the next step—try this. Go find a friend and explain to them how you made that decision. Not right now. Wait till I finish talking. Do it over lunch. And don't just find another technologist friend; find somebody different from you. Find an artist or a writer—or, heaven forbid, find a philosopher and talk to them. In fact, find somebody from the humanities. Why? Because they think about problems differently than we do as technologists. Just a few days ago, right across the street from here, there were hundreds of people gathered together. It was technologists and humanists at that big BiblioTech Conference. And they gathered together because the technologists wanted to learn what it would be like to think from a humanities perspective. You have someone from Google talking to someone who does comparative literature. You're thinking about the relevance of 17th-century French theater—how does that bear upon venture capital? Well, that's interesting. That's a different way of thinking. And when you think in that way, you become more sensitive to the human considerations, which are crucial to making ethical decisions.

So imagine that right now you went and you found your musician friend. And you're telling him what we're talking about, about our whole data revolution and all this—maybe even hum a few bars of our theme music. Well, your musician friend will stop you and say, "You know, the theme music for your data revolution, that's an opera. That's Wagner. It's based on Norse legend. It's gods and mythical creatures fighting over magical jewelry." That's interesting. Now, it's also a beautiful opera, and we're moved by that opera. We're moved because it's about the battle between good and evil, about right and wrong. And we care about right and wrong. We care what happens in that opera. We care what happens in "Apocalypse Now." And we certainly care what happens with our technologies.

We have so much power today, it is up to us to figure out what to do, and that's the good news. We're the ones writing this opera. This is our movie. We figure out what will happen with this technology. We determine how this will all end.

Thank you.
