MIT: Computer system based on light could jumpstart the power of ChatGPT-like machine-learning programs
2023/7/29

Elizabeth A. Thomson | Materials Research Laboratory


Artist's rendition of a computer system based on light that could jumpstart the power of machine-learning programs like ChatGPT. Blue sections represent the micron-scale lasers key to the technology.

Credit: Ella Maru Studio

ChatGPT has made headlines around the world with its ability to write well-done essays, emails, and computer code based on a few questions from a user. Now an MIT-led team reports a system that could lead to machine-learning programs several orders of magnitude more powerful than the one behind ChatGPT. Plus, the system they developed could use several orders of magnitude less energy than the state-of-the-art supercomputers behind the machine-learning models of today.

In the July 17 issue of Nature Photonics, the researchers report the first experimental demonstration of the new system, which does its computations based on the movement of light rather than electrons, using hundreds of micron-scale lasers. With the new system, the team reports a greater than 100-fold improvement in energy efficiency and a 25-fold improvement in compute density, a measure of the power of a system, over state-of-the-art digital computers for machine learning.

Toward the Future

In the paper, the team also cites “substantially several more orders of magnitude for future improvement.” As a result, the authors continue, the technique “opens an avenue to large-scale optoelectronic processors to accelerate machine-learning tasks from data centers to decentralized edge devices.” In other words, cell phones and other small devices could become capable of running programs that can currently only be computed at large data centers.

Further, because the components of the system can be created using fabrication processes already in use today, “we expect that it could be scaled for commercial use in a few years. For example, the laser arrays involved are widely used in cell-phone face ID and data communication,” says Zaijun Chen, first author, who conducted the work while a postdoctoral associate at MIT in the Research Laboratory of Electronics and is now an assistant professor at the University of Southern California.

Says Dirk Englund, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and leader of the work, “ChatGPT is limited in its size by the power of today’s supercomputers. It’s just not economically viable to train models that are much bigger. Our new technology could make it possible to leapfrog to machine-learning models that otherwise would not be reachable in the near future.”

He continues, “We don’t know what capabilities the next-generation ChatGPT will have if it is 100 times more powerful, but that’s the regime of discovery that this kind of technology can allow.” Englund is also leader of MIT’s Quantum Photonics Laboratory and is affiliated with the Research Laboratory of Electronics (RLE) and the Materials Research Laboratory.

A Drumbeat of Progress

The current work is the latest achievement in a drumbeat of progress over the last few years by Englund and many of the same colleagues. For example, in 2019 an Englund team reported the theoretical work that led to the current demonstration. The first author of that paper, Ryan Hamerly, now of RLE and NTT Research Inc, is also an author of the current paper.

Additional coauthors of the current Nature Photonics paper are Alexander Sludds, Ronald Davis, Ian Christen, Liane Bernstein, and Lamia Ateshian, all of RLE; and Tobias Heuser, Niels Heermeier, James A. Lott, and Stephan Reitzenstein of Technische Universität Berlin.

Deep neural networks (DNNs) like the one behind ChatGPT are based on huge machine-learning models that simulate how the brain processes information. However, the digital technologies behind today’s DNNs are reaching their limits even as the field of machine learning is growing. Further, they require huge amounts of energy and are largely confined to large data centers. That is motivating the development of new computing paradigms.
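As a rough illustration (not the authors' code), a DNN's forward pass is a chain of matrix-vector products interleaved with simple nonlinearities; the matrix products account for nearly all of the arithmetic, and they are the operation an optical processor would accelerate. A minimal sketch in Python/NumPy, with made-up layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy three-layer network; the sizes are illustrative only.
layer_sizes = [784, 256, 128, 10]
weights = [rng.standard_normal((m, n)) * 0.01
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    """One forward pass: alternating matrix-vector products and ReLUs."""
    for W in weights[:-1]:
        x = np.maximum(W @ x, 0.0)  # the linear step (W @ x) dominates the cost
    return weights[-1] @ x          # final linear layer, no activation

y = forward(rng.standard_normal(784))
print(y.shape)
```

Each `W @ x` is the kind of dense linear operation that, in the system described here, would be carried out by interfering light rather than by digital multiply-accumulate circuits.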

The Advantages of Light

Using light rather than electrons to run DNN computations has the potential to break through the current bottlenecks. Computations using optics, for example, have the potential to use far less energy than those based on electronics. Further, with optics, “you can have much larger bandwidths,” or compute densities, says Chen. Light can transfer much more information over a much smaller area.

But current optical neural networks (ONNs) have significant challenges. For example, they use a great deal of energy because they are inefficient at converting incoming data based on electrical energy into light. Further, the components involved are bulky and take up significant space. And while ONNs are quite good at linear calculations like adding, they are not great at nonlinear calculations like multiplication and “if” statements.
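To see why the linear/nonlinear split matters, consider a back-of-the-envelope operation count for one fully connected layer. For a hypothetical 1,024-input, 1,024-output layer, the linear part requires about a million multiply-accumulates, while the nonlinearity is only about a thousand elementwise operations (the sizes below are illustrative, not taken from the paper):

```python
# Rough operation count for one fully connected DNN layer.
# Layer sizes are hypothetical, chosen only for illustration.
n_in, n_out = 1024, 1024

linear_ops = n_in * n_out  # multiply-accumulates in W @ x
nonlinear_ops = n_out      # one activation per output element

print(linear_ops)                   # total linear operations
print(linear_ops // nonlinear_ops)  # linear work per nonlinear op
```

Because the linear work outweighs the nonlinear work by roughly the layer width, an architecture that handles the linear part optically can capture most of the potential speedup even if the nonlinearity is handled separately.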

In the current work the researchers introduce a compact architecture that, for the first time, solves all of these challenges and two more simultaneously. That architecture is based on state-of-the-art arrays of vertical-cavity surface-emitting lasers (VCSELs), a relatively new technology used in applications including LiDAR remote sensing and laser printing. The particular VCSELs reported in the Nature Photonics paper were developed by the Reitzenstein group at Technische Universität Berlin. “This was a collaborative project that would not have been possible without them,” Hamerly says.

Logan Wright is an assistant professor at Yale University who was not involved in the current research. Comments Wright, “The work by Zaijun Chen et al. is inspiring, encouraging me and likely many other researchers in this area that systems based on modulated VCSEL arrays could be a viable route to large-scale, high-speed optical neural networks. Of course, the state-of-the-art here is still far from the scale and cost that would be necessary for practically useful devices, but I am optimistic about what can be realized in the next few years, especially given the potential these systems have to accelerate the very large-scale, very expensive AI systems like those used in popular textual ‘GPT’ systems like ChatGPT.”

Chen, Hamerly, and Englund have filed for a patent on the work, which was sponsored by the Army Research Office, NTT Research, the National Defense Science and Engineering Graduate Fellowship Program, the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Volkswagen Foundation.
