Sharing a few "crazy" study methods that sound absurd but actually work!

Full text: about 3,276 characters; estimated reading time: 9 minutes





✔️ The passive-aggressive study method


Stick a big note on your desk that says "Go ahead and play, girl, you wouldn't want to actually pass the exam." This works wonders on anyone with a fierce sense of pride!



When you're tired of studying and tempted to goof off, one glance at that note flips your contrarian switch: fine, now I'm DEFINITELY going to keep studying!!



✔️ The princess study method


Imagine yourself as a princess and start your day of work and study in that role. Getting up early is still hard, but tell yourself: a princess does not lie in bed!


Walk up to your desk: a palace-style lamp, little vintage ornaments arranged on the desktop... the sense of ceremony is maxed out! The more you study, the more fun it gets and the more accomplished you feel!!





✔️ The bread-sniffing study method


Wrap a fragrant slice of toast in cling film. When you're worn out from studying, take a deep sniff: the milky aroma of the toast hits you instantly and perks you right up!


Perfect for grad-school-exam candidates who have no cat or dog to cuddle and who neither smoke nor drink~



Beyond these, do you have any unusual but effective study methods of your own? See you in the comments!


Absurd but effective study methods




Foreign-press close reading: Day 48

01


Original English text

Anyone seduced by A.I.-powered chatbots like ChatGPT and Bard — wow, they can write essays and recipes! — eventually runs into what are known as hallucinations, the tendency for artificial intelligence to fabricate information.


The chatbots, which guess what to say based on information obtained from all over the internet, can’t help but get things wrong. And when they fail — by publishing a cake recipe with wildly inaccurate flour measurements, for instance — it can be a real buzzkill.


Yet as mainstream tech tools continue to integrate A.I., it’s crucial to get a handle on how to use it to serve us. After testing dozens of A.I. products over the last two months, I concluded that most of us are using the technology in a suboptimal way, largely because the tech companies gave us poor directions.


The chatbots are the least beneficial when we ask them questions and then hope whatever answers they come up with on their own are true, which is how they were designed to be used. But when directed to use information from trusted sources, such as credible websites and research papers, A.I. can carry out helpful tasks with a high degree of accuracy.
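A small aside for the tech-curious: the last paragraph's advice about directing a chatbot to trusted sources usually just means pasting the trusted text into your prompt and telling the model to answer only from it. Below is a minimal Python sketch of that idea; the ask_chatbot() helper is hypothetical, a stand-in for whatever real chatbot API you happen to use.

```python
# Minimal sketch of "grounded" prompting: rather than letting the chatbot
# answer from whatever it picked up on the internet, we paste a trusted
# source into the prompt and ask it to answer only from that text.
# NOTE: ask_chatbot() below is a hypothetical placeholder, not a real API.

def build_grounded_prompt(question: str, source_text: str) -> str:
    """Combine a trusted source and a question into one prompt."""
    return (
        "Answer the question using ONLY the source below. "
        "If the source does not contain the answer, say you don't know.\n\n"
        f"SOURCE:\n{source_text}\n\n"
        f"QUESTION: {question}"
    )


def ask_chatbot(prompt: str) -> str:
    """Hypothetical stand-in for a real chatbot call (e.g. an HTTP API request)."""
    raise NotImplementedError("Plug your chatbot API call in here.")


if __name__ == "__main__":
    source = "A basic sponge cake uses 125 g flour, 125 g sugar, and 4 eggs."
    prompt = build_grounded_prompt("How much flour does the cake need?", source)
    print(prompt)                      # inspect the grounded prompt
    # answer = ask_chatbot(prompt)     # uncomment once a real API is wired in
```

The point of the design is simply that the model's answer is constrained to the pasted source, which is what the author means by "a high degree of accuracy".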





02


Vocabulary cards

(1) chatbot [ˈtʃætbɒt] n. a computer program that holds conversations with users
(2) seduce [sɪˈdjuːs] v. to tempt; to lure
(3) hallucination [həˌluːsɪˈneɪʃn] n. a hallucination; something perceived or produced that is not real
(4) tendency [ˈtendənsi] n. a tendency; an inclination
(5) fabricate [ˈfæbrɪkeɪt] v. to make up; to invent (false information)
(6) recipe [ˈresəpi] n. a recipe
(7) inaccurate [ɪnˈækjərət] adj. not accurate; incorrect
(8) buzzkill [ˈbʌzkɪl] n. a person or thing that spoils the fun
(9) mainstream [ˈmeɪnstriːm] adj. & n. belonging to the dominant trend; the mainstream
(10) integrate [ˈɪntɪɡreɪt] v. to combine (things) into a whole
(11) suboptimal [ˌsʌbˈɑːptɪməl] adj. below the best possible standard
(12) credible [ˈkredəbl] adj. believable; trustworthy
(13) accuracy [ˈækjərəsi] n. accuracy; precision





03


Reference translation

Anyone seduced by A.I.-powered chatbots like ChatGPT and Bard — wow, they can write essays and recipes! — eventually runs into what are known as hallucinations, the tendency for artificial intelligence to fabricate information.

Anyone drawn in by AI-powered chatbots such as ChatGPT and Bard thinks, wow, they can even write essays and recipes! But eventually they all run into what are known as hallucinations: they discover that artificial intelligence has a tendency to fabricate information.


The chatbots, which guess what to say based on information obtained from all over the internet, can’t help but get things wrong. And when they fail — by publishing a cake recipe with wildly inaccurate flour measurements, for instance — it can be a real buzzkill.

Chatbots guess what to say based on information gathered from across the internet, so they inevitably get things wrong. And when they fail, for example by publishing a cake recipe with wildly inaccurate flour measurements, it can really spoil the mood.


Yet as mainstream tech tools continue to integrate A.I., it’s crucial to get a handle on how to use it to serve us. After testing dozens of A.I. products over the last two months, I concluded that most of us are using the technology in a suboptimal way, largely because the tech companies gave us poor directions.

Yet as mainstream tech tools keep integrating AI, it is crucial to get a handle on how to make it serve us. Over the past two months I tested dozens of AI products, and I concluded that most of us are using this technology in a less-than-ideal way, largely because the tech companies have given us poor directions.


The chatbots are the least beneficial when we ask them questions and then hope whatever answers they come up with on their own are true, which is how they were designed to be used. But when directed to use information from trusted sources, such as credible websites and research papers, A.I. can carry out helpful tasks with a high degree of accuracy.

We ask chatbots questions and then hope the answers they come up with on their own are correct; that is how they were designed to be used, and it is also when they are least useful. But if AI is directed to use information from trusted sources, such as credible websites and research papers, it can complete tasks well and with a high degree of accuracy.



Several of you came up with really good titles yesterday. Doesn't it feel different once you actually produce some output?


Day 47's article title:

【Russia's Lunar Lander Crashes Into the Moon】


What title do you think would suit today's article?



--THE END--




- Past check-ins you may have missed -


Breaking! Entrance scholarships for new students cancelled?!

D45 | Keep slacking and you'll be overtaken!

Exam candidates who just can't focus, come on in!!

After deciding to take the grad-school exam, I fell in love with studying on the sly...




Hit "like" and "wow", and you're sure to pass the exam