[Share] UIUC research report on parallel computing and heterogeneous acceleration design

Posted 2017-01-16 11:13
For many decades, Moore’s law has bestowed a wealth of transistors that hardware designers and compiler writers have converted to usable performance, without changing the sequential programming interface. The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that
future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than making a single core go faster. This historic shift to multicore processors changes the programming interface by exposing parallelism to the programmer, after decades of sequential computing.

Parallelism has been successfully used in many domains such as high performance computing (HPC), servers, graphics accelerators, and many embedded systems. The multicore inflection point, however, affects the entire market, particularly the client space, where parallelism has not been previously widespread. Programs with millions of lines of code must be converted or rewritten to take advantage of parallelism; yet, as practiced
today, parallel programming for the client is a difficult task performed by few programmers. Commonly used programming models are prone to subtle, hard-to-reproduce bugs, and parallel programs are notoriously hard to test due to data races, non-deterministic interleavings, and complex memory models. Mapping a parallel application to parallel hardware is also difficult given the large number of degrees of freedom (how many cores to use, whether to use special instructions or accelerators, etc.), and traditional parallel environments have done a poor job of virtualizing the hardware for the programmer. As a result, only the most performance-conscious and skilled programmers have been exposed to parallel computing, resulting in little investment in development environments and a lack of trained manpower. There is a risk that while hardware races ahead to ever-larger numbers of cores, software will lag behind and few applications will leverage the potential hardware performance.

Moving forward, if every computer will be a parallel computer, most programs must execute in parallel and most programming teams must be able to develop parallel programs, a daunting goal given the above problems. Illinois has a rich history in parallel computing, dating from the genesis of the field, and continues a broad research program in parallel computing today [1]. This program includes the Universal Parallel Computing Research Center (UPCRC), established at Illinois by Intel and Microsoft, together with a sibling center established at Berkeley. These two centers are focused on the problems of multicore computing, especially in the client and mobile domains.

This paper describes the research vision and agenda for client and mobile computing research at Illinois, focusing on the activities at UPCRC (some of which preceded UPCRC).

Given the long history of parallel computing, it is natural to ask whether the challenges we face today differ from those of the past. Compared to the HPC and server markets, the traditional focus of parallel computing research, the client market brings new difficulties, but it also brings opportunities. Table 1 summarizes some of the key differences.

Attachment: UPCRC_Whitepaper.pdf (1.73 MB)