Resource Allocation for Mobile Edge Computing Platform with Multiple Resource Providers

PI: Ai-Chun Pang (National Taiwan University), CoPI: Yuan-Yao Shih (Academia Sinica)

Status Quo: 

Recently, with ubiquitously connected smart devices, the Internet of Things (IoT) has received tremendous attention and is considered a promising architecture for many applications.

With the diversity of IoT applications, such as wearable computing, smart metering, smart home/city, vehicles and health monitoring, a large number of dense, distributed, and mostly mobile IoT devices is expected to be deployed in the near future. In addition, many applications (such as augmented/virtual reality and vehicle automation) demand high bandwidth and low latency.

These applications need intensive computation to accomplish object tracking, content analytics and intelligent decision-making for better accuracy, performance and user experience. The current networking infrastructure, including radio access and backhaul, encounters difficulties in dealing with the increasing IoT traffic; thus, to fulfill the service requirements of those IoT applications, cloud computing is considered a promising architecture, as it can provide elastic resources to applications running on resource-limited IoT devices.

However, many challenges remain unsolved, such as mobility support, location-awareness and ultra-low-latency requirements, due to the potentially long network delay incurred when time-sensitive data traffic traverses the Internet backbone.

Key New Insights:

– A new paradigm, called fog/edge computing, is emerging. It is an architecture that extends cloud computing to the edge of the network.

– Fog/edge computing has the potential to fulfill the ultra-low-latency requirements of emerging machine-type communication (MTC) applications (such as the tactile Internet, mobile augmented reality, and vehicle automation) through the joint computing power of multiple fog/edge nodes and short-range communication at the edge.

– Since the target services require ultra-low latency, communication latency cannot be neglected in addition to computing latency. The locations of fog/edge nodes and users must be considered as a major factor when deciding which nodes supply resources to which users.

– We discover that more cooperative fog/edge nodes provide higher computing power and hence reduce total computing latency; however, each cooperative fog/edge node then obtains fewer radio resources from the master fog/edge node, and as a result total communication latency increases. Thus, a new type of (communication/computing) cost-performance tradeoff arises, in which a temporal equivalence between the two physically different resources needs to be established.
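
The tradeoff described in the last insight can be sketched numerically. The model below is a simplified illustration under assumed parameters (workload size, per-node computing capacity, payload size, and master-node bandwidth are all hypothetical, not measurements from the project): computing latency shrinks as the workload is split among more cooperative nodes, while communication latency grows because each node receives a smaller slice of the master node's radio bandwidth.

```python
def total_latency(n_nodes, workload_cycles=1.8e9, cycles_per_sec=2e9,
                  data_bits=1e7, bandwidth_bps=1e8):
    """Illustrative end-to-end latency (seconds) when a master fog/edge
    node offloads a task to n_nodes cooperative nodes.

    All default parameter values are hypothetical assumptions chosen
    only to make the tradeoff visible.
    """
    # Computing latency: the workload is split evenly, so more nodes
    # means less computation per node.
    computing = workload_cycles / (n_nodes * cycles_per_sec)
    # Communication latency: each node must receive the task payload
    # over a 1/n_nodes share of the master node's radio bandwidth,
    # so adding nodes inflates the transfer time.
    communication = data_bits / (bandwidth_bps / n_nodes)
    return computing + communication

# Sweep the number of cooperative nodes to find the sweet spot.
latencies = {n: total_latency(n) for n in range(1, 9)}
best = min(latencies, key=latencies.get)
```

With these assumed numbers the total latency is convex in the number of nodes: too few nodes leave computing as the bottleneck, too many make the shrinking per-node bandwidth dominate, so an intermediate team size minimizes end-to-end latency. This is the temporal equivalence between radio and computing resources that the proposed resource-allocation scheme must capture.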

(Updated in July 2017)