Cloud computing is an emerging technology with both pros and cons, and the cloud innovation will certainly leave its footprint as it faces the new economic realities of the second decade of this century. Rather than installing a series of commercial packages on each computer, including never-ending security patches, users would only have to load one application. That application would allow workers to log on to a Web-based service that hosts all the programs they need for their jobs. Remote servers owned by the service provider would run everything from e-mail to word processing to complex data-analysis programs. This is cloud computing, often called the fifth utility (after electric power, gas, water, and telephony), and it could change the way individuals and companies operate. However, as the news media often report outages as simple glitches (usually downplayed by cloud-hosting companies, their providers, or the responsible managers, who boast about their 99.99% reliability), the crucial problem with cloud computing is its occasional, though dramatic, lack of reliability and security. Both of these key features need to be assessed duly and in a timely manner in order to manage this new model of distributed computing, i.e. the cloud. This workshop will examine methods and software programs that address these assessment and management hurdles from the cloud-hosting (producer's risk) perspective in addition to the customer (consumer's risk) base, an avenue the author has examined before. The purpose is to prioritize and cost-optimize the countermeasures needed to reach a desirable level of customer satisfaction as well as cloud-hosting best practices. Quantitative methods of statistical inference on the Quality of Service (QoS), or conversely the Loss of Service (LoS), as commonly used customer-satisfaction metrics of system reliability and security performance, are reviewed.
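As a minimal sketch of the kind of statistical inference mentioned above (not taken from the abstract itself), one can treat each service probe as a Bernoulli trial, with 1 meaning the service was unavailable, and report a point estimate of LoS together with a 95% Wilson confidence interval; the monitoring data and the 2% failure probability here are hypothetical.

```python
import math
import random

def los_estimate(samples):
    """Point estimate and 95% Wilson confidence interval for the
    Loss of Service (LoS) fraction, where each sample is
    1 = service lost, 0 = service delivered."""
    n = len(samples)
    p = sum(samples) / n                     # observed LoS fraction
    z = 1.96                                 # 95% normal quantile
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, max(0.0, centre - half), min(1.0, centre + half)

random.seed(1)
# Hypothetical monitoring data: each probe fails with probability 0.02
probes = [1 if random.random() < 0.02 else 0 for _ in range(10_000)]
p_hat, lo, hi = los_estimate(probes)
qos = 1 - p_hat                              # QoS is the complement of LoS
```

The interval width then indicates how much monitoring data is needed before a "99.99% reliability" claim can be distinguished from ordinary sampling noise.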
Subsequently, as an analytical alternative to simulation practices, a cloud Risk-O-Meter approach is studied to assess risk and manage it cost-optimally through an information-gathering, database-type algorithm. The primary goal of these methods is to optimize plans for improving the quality of a cloud operation and to decide which countermeasures to take. Among the simulation alternatives, discrete event simulation (DES) is reviewed to estimate the risk indices in a large cloud computing environment and to compare them with the intractable and lengthy theoretical Markov solutions.
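To illustrate the comparison between DES estimates and closed-form Markov results, the following sketch (an illustration, not the author's software; the failure and repair rates are hypothetical) simulates a single repairable server with exponential up-times and repair times, and checks the simulated unavailability against the two-state Markov steady-state value λ/(λ+μ):

```python
import random

def simulated_unavailability(lam, mu, cycles, seed=0):
    """Discrete-event simulation of one repairable server:
    exponential up-times (failure rate lam) alternating with
    exponential repair times (repair rate mu)."""
    rng = random.Random(seed)
    up_time = down_time = 0.0
    for _ in range(cycles):
        up_time += rng.expovariate(lam)    # time until the next failure
        down_time += rng.expovariate(mu)   # time spent under repair
    return down_time / (up_time + down_time)

lam, mu = 0.01, 0.5            # hypothetical rates per hour
markov = lam / (lam + mu)      # closed-form steady-state unavailability
des = simulated_unavailability(lam, mu, cycles=200_000)
```

For a single component the Markov answer is trivial; the point of DES in a large cloud is that the same event-driven loop scales to many interdependent servers, where the Markov state space becomes intractable.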
Dr. M. Sahinoglu (Auburn University, Montgomery)