SAT Reading: Supplementary Extension Material


  Since the start of the year, a team of researchers at Carnegie Mellon University, supported by grants from the Defense Advanced Research Projects Agency and Google and tapping into a research supercomputing cluster provided by Yahoo, has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer, calculating 24 hours a day, seven days a week, that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.

  "For all the advances in computer science, we still don't have a computer that can learn as humans do, cumulatively, over the long term," said the team's leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.

  The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories: cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like "San Francisco is a city" and "sunflower is a plant."

  NELL also learns facts that are relations between members of two categories. For example, "Peyton Manning is a football player." "The Indianapolis Colts is a football team." By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts, even if it has never read that Mr. Manning plays for the Colts. "Plays for" is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
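  The kind of pattern-based inference described above can be sketched in miniature. The snippet below is a deliberately simplified, hypothetical illustration (NELL's real algorithms are far more sophisticated and probabilistic): a single learned text pattern for the "plays for" relation is matched against sentences to produce candidate facts.

```python
import re

# Hypothetical, simplified sketch of learned-pattern extraction for the
# "plays for" relation. A real system would learn many patterns and
# weigh them statistically rather than rely on one regular expression.
PATTERN = re.compile(r"(?P<player>[A-Z][\w. ]+?) plays for (?P<team>the [A-Z][\w ]+)")

def extract_plays_for(sentences):
    """Return (player, team) candidate facts found by the pattern."""
    facts = []
    for sentence in sentences:
        match = PATTERN.search(sentence)
        if match:
            facts.append((match.group("player").strip(), match.group("team").strip()))
    return facts

sentences = [
    "Peyton Manning plays for the Indianapolis Colts.",
    "The weather in Pittsburgh was cold.",
]
print(extract_plays_for(sentences))  # → [('Peyton Manning', 'the Indianapolis Colts')]
```

  A candidate fact found this way would then be checked against the system's existing categories (is the first argument a known football player? is the second a known team?) before being accepted with some estimated probability.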

  The learned facts are continuously added to NELL's growing database, which the researchers call a "knowledge base." A larger pool of facts, Dr. Mitchell says, will help refine NELL's learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.

  NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies (formal descriptions of concepts and relationships) to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a "semantic Web."

  Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.

  For example, I.B.M.'s question-answering machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show "Jeopardy!" Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like "U.S. presidents" and "cheeses."

  Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. "What's exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help," said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.

  Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.

  "The technology is really maturing, and will increasingly be used to gain understanding," said Alfred Spector, vice president of research for Google. "We're on the verge now in this semantic world."

  With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: "Anger is an emotion." "Bliss is an emotion." And about a dozen more.
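  The seeding step described above can be illustrated with a minimal sketch. All names here are hypothetical, and the confidence threshold is only an assumption loosely echoing the 87 percent accuracy figure mentioned earlier; it is not how NELL actually scores facts.

```python
from collections import defaultdict

class KnowledgeBase:
    """Toy knowledge base: categories seeded by hand, then grown automatically."""

    def __init__(self):
        self.categories = defaultdict(set)

    def seed(self, category, examples):
        """Prime a category with a handful of hand-picked true examples."""
        self.categories[category].update(examples)

    def learn(self, category, candidate, confidence, threshold=0.87):
        """Accept a machine-extracted fact only if its confidence clears the bar."""
        if confidence >= threshold:
            self.categories[category].add(candidate)
            return True
        return False

kb = KnowledgeBase()
kb.seed("emotion", {"anger", "bliss", "joy", "fear"})
kb.learn("emotion", "serenity", confidence=0.93)   # accepted
kb.learn("emotion", "tuesday", confidence=0.40)    # rejected
print(sorted(kb.categories["emotion"]))
```

  The design point the sketch captures is the bootstrapping loop: a few trusted seed facts anchor each category, and everything added afterward must pass an automated confidence check rather than human review.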

  
