TABLE OF CONTENTS

COPYRIGHT
DECLARATION
RECOMMENDATION
DEPARTMENTAL ACCEPTANCE
ACKNOWLEDGEMENT
ABSTRACT
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
1 INTRODUCTION
  1.1 Background and Motivation
    1.1.1 Motivation
    1.1.2 Real-time Systems
    1.1.3 Multiprocessor Platforms
    1.1.4 Task and Platform Model
    1.1.5 Deadline in Real-time Systems
    1.1.6 Optimal Scheduling and Clairvoyance
    1.1.7 Quantum Annealing
    1.1.8 Totally Unimodular Matrix
  1.2 Problem Definition
  1.3 Objectives
  1.4 Scope of the Work
  1.5 Originality of the Work
  1.6 Organisation of the Work
2 LITERATURE REVIEW
3 METHODOLOGY
  3.1 Block Diagrams
    3.1.1 Using Quantum Annealing
    3.1.2 Using Linear Programming Relaxations
  3.2 Quantum Annealing Methodology
    3.2.1 Task System Generation
    3.2.2 Scheduling Problem Formulation
    3.2.3 Reduction to the Constrained Quadratic Model
    3.2.4 Solution of CQM via Quantum Annealing
    3.2.5 Design of Multiprocessor RTOS Simulation Kernel
    3.2.6 Task Allocation to Processors
    3.2.7 Optimal Uniprocessor Scheduling
    3.2.8 Job Generation
  3.3 Linear Programming Relaxation Methodology
    3.3.1 Linear Programming Relaxations to the Scheduling Problem
    3.3.2 Solution of Linear Program
    3.3.3 Feasibility Testing
4 RESULTS AND ANALYSIS
  4.1 Intermediate Results
    4.1.1 Uniprocessor Scheduling Results
  4.2 Final Results and Analysis
    4.2.1 Multiprocessor Scheduling Results
[…]

LIST OF FIGURES
[…]
4.11 Problem Size vs LP Feasibility Ratio at various values of Problem Complexity
4.12 Problem Size vs LP Feasibility Ratio at various values of Scale of Exponential Distribution
4.13 Problem Size vs LP Feasibility Ratio at various values of Deadline Range
4.14 High-level view of the Overall Quantum Annealing Process
4.15 Problem size vs Runtime for Branch-and-Bound and Quantum Annealing algorithms (left) and the LP relaxation method (right)
LIST OF TABLES
3.1 A Sample Implicit-deadline Task System
4.1 Parameters for the Three-task Task System
4.2 Parameters for the Ten-task Task System
4.3 Parameters for the Seven-task Multiprocessor Task System
4.4 Values of System Parameters used for the Analysis

LIST OF ABBREVIATIONS
CMOS : Complementary Metal-Oxide Semiconductor
P : Deterministic Polynomial-time complexity class
NP : Non-deterministic Polynomial-time complexity class
CQM : Constrained Quadratic Model
QUBO : Quadratic Unconstrained Binary Optimization
WCET : Worst-Case Execution Time
L&L : Liu and Layland
EDF : Earliest Deadline First
RM : Rate Monotonic
DAG : Directed Acyclic Graph
MPSoC : Multi-Processor System-on-Chip
QAOA : Quantum Approximate Optimization Algorithm
VQE : Variational Quantum Eigensolver
IP : Integer Program(ming)
ILP : Integer Linear Program(ming)
LP : Linear Program(ming)
KKT : Karush-Kuhn-Tucker
BQP : Bounded-error Quantum Polynomial-time
PH : Polynomial Hierarchy

CHAPTER 1
INTRODUCTION
[…]

1.1.3 Multiprocessor Platforms

Identical platform
A multiprocessor platform P = {p_i : i ∈ {1, 2, ..., m}} is said to be identical if, given a universal task set U (comprising all tasks that can possibly be executed on the platform), ∀ p_i, p_j ∈ P and ∀ τ ∈ U, T(τ, p_i) = T(τ, p_j), where T : U × P → Z+ gives the execution time of a job produced by a task τ ∈ U on processor p_i ∈ P.

Uniform platform
A multiprocessor platform P = {p_i : i ∈ {1, 2, ..., m}} is said to be uniform if each processor has an associated computing capacity, S = {s_i : i ∈ {1, 2, ..., m}}, in the sense that if a job executes on processor p_i for t units of time, then s_i × t units of execution will have completed.

Unrelated platform
A multiprocessor platform P = {p_i : i ∈ {1, 2, ..., m}} is said to be unrelated if each job j produced by a task has an associated set of execution rates on the processors, i.e. a function R : J × P → Z+ can be defined which gives the execution rate of a job j on processor p, in the sense that if r = R(j, p_i) and j executes for t units of time on processor p_i, then r × t units of execution will have completed.

1.1.4 Task and Platform Model

Definitions
a) Task: A task is an abstraction of a set of instructions that can be executed by a processing platform.
b) Job: A job is an execution of a certain task upon a certain processing platform. Since a task can be executed multiple times on a processing platform, it can be understood as a set of jobs.
c) Recurrent Task: A task is said to be recurrent if it releases multiple jobs at multiple instants in time.
d) Periodic Task: A recurrent task is said to be periodic if it releases jobs periodically (at constant time intervals).
e) Sporadic Task: A recurrent task is said to be sporadic if the difference between successive release times of its jobs has a lower bound but no upper bound.
f) Task System: A task system is a countable set of tasks, all of which have a pre-specified set of common properties.

The Model
The model studied in this work has been considered in [4] and is a generalization of the three-parameter model [11]. In this work, a task system Γ = {τ_1, τ_2, ..., τ_n} of n recurrent tasks has been considered on an unrelated multiprocessing platform P = {p_1, p_2, ..., p_m} consisting of m processors. Furthermore, the Worst-Case Execution Time (WCET) of a task τ_i is given by an m-tuple (C_i1, C_i2, ..., C_im), where C_ij is the maximum time that task τ_i takes to complete its execution on processor p_j. A task τ_i can release a countably infinite sequence of jobs. Each job released by τ_i at time t must complete its execution before or at time t + D_i, where D_i is called the relative deadline of task τ_i. Furthermore, a parameter T_i is defined for each task τ_i such that if τ_i has released a job at time instant t, it cannot release another job before t + T_i; T_i is hence called the period of the task [4].
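To make the model concrete, the sketch below encodes these definitions as plain Python data structures and instantiates the five-task, four-processor example that appears later in Table 3.1. The class and field names are illustrative choices, not notation taken from the thesis.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass(frozen=True)
class Task:
    """A recurrent task in the three-parameter model on an unrelated platform."""
    wcet: Tuple[int, ...]   # (C_i1, ..., C_im): WCET of the task on each processor
    deadline: int           # D_i: relative deadline of every job of the task
    period: int             # T_i: minimum separation between successive releases

    def utilization(self, j):
        """Utilization C_ij / T_i of this task if assigned to processor j."""
        return self.wcet[j] / self.period


@dataclass
class TaskSystem:
    """An implicit-deadline task system Γ on m unrelated processors."""
    tasks: List[Task]
    m: int

    def __post_init__(self):
        for t in self.tasks:
            assert len(t.wcet) == self.m, "one WCET per processor is required"
            assert t.deadline == t.period, "implicit-deadline means D_i = T_i"


# The five-task, four-processor sample system that appears later in Table 3.1.
sample = TaskSystem(
    tasks=[
        Task((1, 3, 6, 2), 7, 7),
        Task((3, 5, 10, 4), 12, 12),
        Task((2, 4, 6, 5), 6, 6),
        Task((1, 1, 2, 1), 3, 3),
        Task((3, 3, 4, 6), 10, 10),
    ],
    m=4,
)
```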
1.1.5 Deadline in Real-time Systems
A real-time system can be divided into three categories based upon the relation between the deadline of a task and its period. The categories are discussed below.

Implicit-deadline Systems
A real-time task system (whether periodic or sporadic) is called an implicit-deadline system if ∀ τ_i ∈ Γ, D_i = T_i. In this case, the three-parameter model [11] reduces to the Liu and Layland model [12].

Constrained-deadline Systems
A real-time task system is called a constrained-deadline system if ∀ τ_i ∈ Γ, D_i ≤ T_i.

Arbitrary-deadline Systems
A real-time task system is called an arbitrary-deadline system if ∀ τ_i ∈ Γ, D_i and T_i are independent.

1.1.6 Optimal Scheduling and Clairvoyance

Clairvoyant Scheduler
A scheduler S is said to be clairvoyant if it knows everything about the future, especially the execution times and arrival times of all the jobs to be executed on the platform.

Optimal Scheduler
A scheduler S is said to be optimal if it produces the same schedule on the same scheduling problem as the best clairvoyant scheduler would.

1.1.7 Quantum Annealing
[…] energy minimization in the Ising model, and hence can be solved directly by a quantum computer.

1.1.8 Totally Unimodular Matrix
A matrix is totally unimodular if and only if every minor of the matrix is either 0, +1 or −1. A minor of a matrix is the determinant of a square submatrix formed by eliminating one or more of its rows and columns.
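This definition can be checked directly, if inefficiently, by enumerating square submatrices. The brute-force helper below is an illustrative sketch (not part of the thesis) using NumPy; it is only practical for small matrices.

```python
from itertools import combinations

import numpy as np


def is_totally_unimodular(A):
    """Brute-force check that every square submatrix of A has determinant
    in {-1, 0, +1}. Exponential in the matrix size, so only usable for small
    matrices, but it follows the definition directly."""
    rows, cols = A.shape
    for k in range(1, min(rows, cols) + 1):
        for r in combinations(range(rows), k):
            for c in combinations(range(cols), k):
                det = int(round(np.linalg.det(A[np.ix_(r, c)])))
                if det not in (-1, 0, 1):
                    return False
    return True


# Example: the assignment constraints of a 2-task, 2-processor partitioning ILP
# (one row per task, one column per x_ij) form a totally unimodular 0/1 matrix.
A = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]])
print(is_totally_unimodular(A))  # True
```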
1.2 Problem Definition
Given the system model, with the additional constraint that the system is implicit-deadline, find a partition function F : Γ → P such that, given any optimal uniprocessor scheduling algorithm A, all possible schedules produced by A on all partitions generated by F are optimal.

1.3 Objectives
The objectives of this work are:
• To solve the partitioned scheduling problem using quantum annealing.
• To develop a framework for the utilization of quantum computation for scheduling real-time systems.
• To analyze the effectiveness of the linear programming relaxation method for solving the partitioned scheduling problem.

1.4 Scope of the Work
This work considers partitioned scheduling of implicit-deadline, hard real-time, recurrent sporadic task systems on unrelated multiprocessor platforms and uses quantum annealing and linear programming relaxation methods to find global optimum solution(s) to the problem. Furthermore, subsequent analysis of the performance of the algorithms, the feasibility, and the overall utilization of the schedules has been performed via simulation.

1.5 Originality of the Work
The NP-hard nature of the partitioned scheduling problem has prevented evaluation of its global optimum solutions. Significant research effort has been invested in devising polynomial-time approximation algorithms as well as algorithms using metaheuristics for the problem. However, direct evaluation of the optimum solution via quantum annealing has not been conducted. This work has bridged this research gap: quantum supremacy has been observed for the problem over the classical branch-and-bound algorithm with metaheuristic improvements [21] [22]. Additionally, this work introduces the linear programming relaxation methodology, which has been shown to be superior to both the branch-and-bound algorithm and the quantum annealing algorithm, with a substantial speedup over them, while remaining feasible over a large range of problem sizes.

Furthermore, this work generalizes the uniprocessor task system generation algorithm to unrelated multiprocessor platforms, which can serve as an important tool for subsequent research. This work also introduces a simulation model for hard real-time task-system simulation on unrelated multiprocessor platforms. Although uniprocessor simulators and multiprocessor simulators for identical platforms were readily available, those for unrelated multiprocessor platforms were difficult to come by. The simulation model introduced in this work can be readily reused for subsequent research work.

Last but not least, this work performs a detailed quantitative analysis of the task-system generation algorithm, which has validated its stability and also sheds light upon the different aspects of its behavior. This analysis will form a platform for using the algorithm with confidence in subsequent research work.

1.6 Organisation of the Work
The remaining portion of this document discusses the following. Chapter 2 discusses the progress of research effort in both real-time scheduling and quantum annealing that has formed a starting point for this work. Chapter 3 discusses the methodologies in detail. Chapter 4 discusses the results of this work and the subsequent analyses. Chapter 5 discusses the conclusions of the work done and provides recommendations for further research.

CHAPTER 2
LITERATURE REVIEW
[…]

The linear programming relaxation method is a well-known method for finding fractional solutions or bounds on the solution of hard integer linear programs. The relaxation method proposed in this work arose out of a well-understood result in polyhedral combinatorics: the relaxation of the integer constraints of a linear program preserves integrality only if the coefficient matrix of the relaxed problem is totally unimodular [42] [43]. In general, the relaxation method has been employed in a variety of scenarios, such as solving graph cut problems [44] [45], k-means clustering [46], k-medians clustering [47], and Boolean tensor factorization [48], among others.

CHAPTER 3
METHODOLOGY

3.1 Block Diagrams

3.1.1 Using Quantum Annealing
Figure 3.1: Block Diagram using Quantum Annealing

3.1.2 Using Linear Programming Relaxations
Figure 3.2: Block Diagram using Linear Programming Relaxations

3.2 Quantum Annealing Methodology

3.2.1 Task System Generation
Task system generation involves defining an implicit-deadline, hard real-time, recurrent sporadic task system Γ = {τ_1, τ_2, ..., τ_n}. Each such task has an m-tuple of worst-case execution time parameters C_i = (C_i1, C_i2, ..., C_im) on an unrelated multiprocessor platform of m processors. Additionally, it has a deadline parameter D_i and a period parameter T_i, with the added constraint that D_i = T_i, since the system is implicit-deadline (Section 1.1.5). A sample of such a task system is illustrated below with n = 5 and m = 4.

Task (τ_i)   WCET (C_i)      Deadline (D_i = T_i)
1            (1, 3, 6, 2)    7
2            (3, 5, 10, 4)   12
3            (2, 4, 6, 5)    6
4            (1, 1, 2, 1)    3
5            (3, 3, 4, 6)    10

Table 3.1: A Sample Implicit-deadline Task System

For task system generation, the algorithm described in [49] has been generalized to the multiprocessor case without the feasibility test, since the feasibility test does not generalize tractably to unrelated multiprocessor platforms. The algorithm has been outlined below.

Algorithm 1: Task System Generation
Require: σ_u ∈ R; n, m ∈ Z+ with m < n; T_min, T_max ∈ Z+ with T_min < T_max
procedure GenerateTaskSet(σ_u, n, m, T_min, T_max)
    for i ← 1 to n do
        U_i ← [2, 2, ..., 2]  (m entries)
        C_i ← [0, 0, ..., 0]  (m entries)
        T_i ← ⌊U(T_min, T_max)⌋        ▷ generate the period from a uniform distribution
        while ∃ U_ij > 1 do
            U_ij ← Exp(σ_u)            ▷ generate utilization from an exponential distribution
            C_ij ← ⌊U_ij × T_i⌋
        end while
    end for
    Γ ← {(C_i, T_i) | i ∈ {1, 2, ..., n}}
    return Γ
end procedure
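A possible NumPy transcription of Algorithm 1 is sketched below. The function name and the use of numpy.random are implementation choices; the masked redraw reproduces the effect of the while-loop, which resamples any per-processor utilization greater than 1.

```python
import numpy as np


def generate_task_set(sigma_u, n, m, t_min, t_max, seed=None):
    """Generate an implicit-deadline task system on m unrelated processors.

    Returns a list of (C_i, T_i) pairs, where C_i is an m-vector of WCETs and
    T_i is the period (and relative deadline). Requires m < n, as in Algorithm 1.
    """
    rng = np.random.default_rng(seed)
    gamma = []
    for _ in range(n):
        # Period drawn from a uniform distribution over [t_min, t_max), floored.
        t_i = int(np.floor(rng.uniform(t_min, t_max)))
        # Per-processor utilizations start above 1 and are redrawn from
        # Exp(sigma_u) while any entry exceeds 1, mirroring the while-loop.
        u_i = np.full(m, 2.0)
        while np.any(u_i > 1.0):
            mask = u_i > 1.0
            u_i[mask] = rng.exponential(scale=sigma_u, size=int(mask.sum()))
        c_i = np.floor(u_i * t_i).astype(int)
        gamma.append((c_i, t_i))
    return gamma


# Example: 5 tasks on 4 processors with periods drawn from [5, 20).
for c_i, t_i in generate_task_set(sigma_u=0.4, n=5, m=4, t_min=5, t_max=20, seed=1):
    print(c_i, t_i)
```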
[…]

3.2.3 Reduction to the Constrained Quadratic Model
The reduction has been illustrated below.

a_{(i−1)m + j} = C_ij / T_i   (3.9)
b_ij = 0,  b^(c)_ij = 0,  b^(d)_ij = 0   (3.10)
r = 0,  r^(c)_1 = 1,  r^(c)_2 = −1,  r^(d) = 0   (3.11)
a^(1)_i = −1,  a^(2)_i = C_ij / T_i  ∀ j   (3.12)

3.2.4 Solution of CQM via Quantum Annealing
Since an equivalence exists between the CQM model and a QUBO problem, a quantum computer can find the solution of the CQM problem. The CQM solver provided by the D-Wave Systems quantum computer estimates the KKT multipliers internally, without the user having to tune the multipliers. Up to 1 million decision variables are available for the solution of binary quadratic problems [54]. Hence, after converting the problem into CQM form, quantum annealing has been performed according to equations 1.2 and 1.4.

3.2.5 Design of Multiprocessor RTOS Simulation Kernel
After the solution of the partitioning problem has been obtained, it has to be converted to an equivalent partitioning on some multiprocessor system. For the subsequent analytical work, a multiprocessor simulation kernel has been designed and implemented. The multiprocessor kernel is composed of a set of uniprocessor kernels with separate job queues, as required by the partitioned scheduling approach. The design of the RTOS kernel has been depicted below.

Figure 3.3: RTOS Kernel Design

3.2.6 Task Allocation to Processors
The solution of quantum annealing is a partition of n tasks among m processors. After the solution has been obtained from the quantum computer, the task set Γ has been partitioned accordingly and the partitioned task systems were provided to the corresponding uniprocessor kernels for scheduling.
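The full CQM formulation is only partly visible in this preview, so the following sketch is a plausible reconstruction rather than the thesis's exact model: binary variables x_ij assign task i to processor j, each task is assigned to exactly one processor, each processor's utilization is bounded by 1 (the EDF test of equation 3.13), and the total utilization is taken as the objective, which is an assumption. It uses the D-Wave Ocean SDK (dimod and LeapHybridCQMSampler), which requires Leap API access.

```python
from dimod import Binary, ConstrainedQuadraticModel, quicksum
from dwave.system import LeapHybridCQMSampler


def build_partitioning_cqm(wcet, periods):
    """CQM for partitioning n tasks onto m unrelated processors.

    wcet[i][j] is C_ij and periods[i] is T_i (= D_i for implicit deadlines).
    Variable x_{i}_{j} = 1 means task i is assigned to processor j.
    """
    n, m = len(wcet), len(wcet[0])
    x = {(i, j): Binary(f"x_{i}_{j}") for i in range(n) for j in range(m)}
    cqm = ConstrainedQuadraticModel()
    # Assumed objective: minimize the total utilization of the chosen assignment.
    cqm.set_objective(quicksum(wcet[i][j] / periods[i] * x[i, j]
                               for i in range(n) for j in range(m)))
    # Each task is assigned to exactly one processor.
    for i in range(n):
        cqm.add_constraint(quicksum(x[i, j] for j in range(m)) == 1,
                           label=f"assign_{i}")
    # Each processor's utilization must not exceed 1 (the EDF bound of eq. 3.13).
    for j in range(m):
        cqm.add_constraint(quicksum(wcet[i][j] / periods[i] * x[i, j]
                                    for i in range(n)) <= 1,
                           label=f"capacity_{j}")
    return cqm


def solve_partition(wcet, periods):
    """Sample the CQM on the hybrid solver and return a processor index per task."""
    n, m = len(wcet), len(wcet[0])
    sampler = LeapHybridCQMSampler()              # needs D-Wave Leap credentials
    sampleset = sampler.sample_cqm(build_partitioning_cqm(wcet, periods),
                                   label="partitioned scheduling")
    feasible = sampleset.filter(lambda row: row.is_feasible)
    best = feasible.first.sample                  # assumes a feasible sample exists
    return [next(j for j in range(m) if best[f"x_{i}_{j}"] > 0.5) for i in range(n)]
```

The returned list can then be handed to the per-processor kernels exactly as described above, with each partition verified by the EDF test before simulation.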
3.2.7 Optimal Uniprocessor Scheduling
Although estimating the schedulability of a general task system Γ is intractable upon multiprocessing platforms, it is efficient to verify the schedulability for a given partitioning of Γ. For the Earliest-Deadline-First (EDF) uniprocessor scheduling algorithm [1], it can be verified in linear time using the following test [4]. A partition Γ_p ⊆ Γ is EDF-schedulable iff

Σ_{τ_i ∈ Γ_p} C_i / T_i ≤ 1.   (3.13)

If the test fails upon any partition, then the partitioning provided by the quantum annealer has violated some constraints. It can mean one of two things: either an infeasible solution has been found by the annealer (due to errors in computation), or the task system has no feasible solution. The former case was addressed by sampling the annealer for multiple runs, which would yield a feasible solution (whether optimal or suboptimal). In the latter case, however, it had to be assumed that the task system has no feasible partitioned schedule (although not provably so). In that case, the designer of the real-time system has to relax the design so that a feasible solution can be found.

EDF Scheduling on a Uniprocessor
The Earliest Deadline First (EDF) scheduling algorithm [55] has been used for scheduling a partitioned task system Γ_p on a particular processor p_i. The EDF scheduling algorithm has been shown to be optimal for preemptive scheduling on uniprocessor platforms [56] [55]. Hence, it has been used for this work.

3.2.8 Job Generation
After a partitioned schedule has been designed for the tasks, the jobs of the tasks were generated and sent to the job queue at times separated by at least the task period. They were then scheduled by the algorithm. There are two non-deterministic variables in the job generation process: the actual execution time c_i and the arrival time a^(n)_i of the nth job of task τ_i. These are modeled as random variables as follows:

c_i ∼ U(0, C_i)   (3.14)
b^(n)_i ∼ Exp(λ)   (3.15)
a^(n)_i = b^(n)_i + a^(n−1)_i + D_i   (3.16)

where λ is a parameter called the arrival rate, which can be adjusted during simulation.

3.3 Linear Programming Relaxation Methodology

3.3.1 Linear Programming Relaxations to the Scheduling Problem
[…]

Lemma 1.3 (Hoffman and Kruskal, 1956) [43]. Let Q(b, b′, c, c′) = {x : b ≤ Ax ≤ b′ and c ≤ x ≤ c′} be a polyhedron in k-dimensional space, with b, b′, c, c′ being vectors with components in either Z or ±∞. Q is an integer polyhedron if and only if A is totally unimodular.

Lemma 1.4. The polyhedron induced by the constraints (equations 3.24 and 3.25) is an integer polyhedron.

Proof. Let b = 1_{n×1}, b′ = ∞_{n×1}, c = 0_{mn×1} and c′ = 1_{mn×1}. Clearly, all the components of b, b′, c, c′ are in Z ∪ {±∞}. Since A is totally unimodular, it follows that the polyhedron induced by the constraints is an integer polyhedron.

Thus, if a finite optimum solution exists for the LPP, it must occur at one of the faces (or vertices) of the integer polyhedron, which will, by definition, be a vector of integers. This proves the theorem.

3.3.2 Solution of Linear Program
Once the integer program had been relaxed to a linear program, it was solved using GLOP [22], Google's open-source linear programming solver. It utilizes the revised primal-dual simplex algorithm [57] [58], which has been optimized for sparse matrices. This is well suited to this work, as the matrix A in the above problem has been seen to be highly sparse.

3.3.3 Feasibility Testing
The trade-off of relaxing the schedulability constraint in the problem definition is that the resulting solution (although integral) may not be feasible upon the platform. Hence, feasibility testing has to be done. If the solution is found not to be feasible, methods that solve the full integer problem (such as the quantum annealing method) have to be employed. Since the optimal EDF algorithm [1] has been considered for this work, feasibility testing was performed according to equation 3.13, as described in Section 3.2.7.
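As a concrete illustration of Sections 3.3.2 and 3.3.3, the sketch below solves the relaxed assignment problem with OR-Tools' GLOP solver and then re-checks the dropped schedulability constraint with the EDF test of equation 3.13. The variable layout and objective follow the assumed formulation used in the CQM sketch above, not necessarily the thesis's exact ILP.

```python
from ortools.linear_solver import pywraplp


def lp_relaxation_partition(wcet, periods):
    """Relax x_ij in {0, 1} to 0 <= x_ij <= 1 and solve with GLOP.

    Only the assignment constraints are kept, so the coefficient matrix stays
    totally unimodular and a basic optimal solution is integral; the dropped
    schedulability constraint is re-checked afterwards (Section 3.3.3).
    """
    n, m = len(wcet), len(wcet[0])
    solver = pywraplp.Solver.CreateSolver("GLOP")
    x = [[solver.NumVar(0.0, 1.0, f"x_{i}_{j}") for j in range(m)]
         for i in range(n)]
    for i in range(n):
        solver.Add(sum(x[i][j] for j in range(m)) == 1)
    solver.Minimize(sum(wcet[i][j] / periods[i] * x[i][j]
                        for i in range(n) for j in range(m)))
    if solver.Solve() != pywraplp.Solver.OPTIMAL:
        return None
    # Round the (already integral) solution to a processor index per task.
    return [max(range(m), key=lambda j: x[i][j].solution_value())
            for i in range(n)]


def edf_feasible(wcet, periods, assignment):
    """Feasibility test of eq. 3.13: utilization of every processor <= 1."""
    load = [0.0] * len(wcet[0])
    for i, j in enumerate(assignment):
        load[j] += wcet[i][j] / periods[i]
    return all(u <= 1.0 for u in load)
```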
CHAPTER 4
RESULTS AND ANALYSIS

4.1 Intermediate Results

4.1.1 Uniprocessor Scheduling Results
Initially, the uniprocessor RTOS was designed and implemented. The results of uniprocessor scheduling for certain task systems upon the RTOS are depicted below.

1. A three-task system simulated for 100 time steps

Task   Worst-case Execution Time (WCET)   Relative Deadline
1      4                                  10
2      5                                  26
3      6                                  17

Table 4.1: Task System Parameters

Figure 4.1: Schedule generated by the uniprocessor EDF algorithm for 100 time steps. The numbers at the top of the bars indicate the job number of the corresponding task.

2. A ten-task system simulated for 100 time steps

Task   Worst-case Execution Time (WCET)   Relative Deadline
1      2                                  10
2      3                                  26
3      2                                  99
4      4                                  60
5      7                                  45
6      3                                  100
7      4                                  50
8      4                                  65
9      6                                  80
10     3                                  92

Table 4.2: Task System Parameters

Figure 4.2: Schedule generated by the uniprocessor EDF algorithm for 100 time steps. The numbers at the top of the bars indicate the job number of the corresponding task.

[…]

We have

0 ≤ (1 − e^(−σ_u)) ≤ 1   (since it is a probability)
or, 0 ≤ (1 − e^(−σ_u))^m ≤ 1,   (m > 0)
or, 0 ≥ −(1 − e^(−σ_u))^m ≥ −1,   (m > 0)
or, 0 ≤ 1 − (1 − e^(−σ_u))^m ≤ 1,   (m > 0)
or, 0 ≤ (1 − (1 − e^(−σ_u))^m)^k ≤ 1,   (m, k > 0).

Thus, lim_{k→∞} (1 − (1 − e^(−σ_u))^m)^k = 0, so

P(H) = lim_{k→∞} [1 − (1 − (1 − e^(−σ_u))^m)^k] = 1.   (4.7)

Theorem 3. Algorithm 1 has an average runtime of O(nm / (1 − e^(−σ_u))^m).

Proof. The expected value of the geometric random variable X is

E[X] = 1 / p^(k)_i = 1 / (1 − e^(−σ_u))^m.   (4.8)

This is the expected number of iterations of the while loop. Since each test of the while loop takes an average of m/2 steps (linear search) and the outer loop runs for n steps (one per task), the average runtime of Algorithm 1 is O(nm / (1 − e^(−σ_u))^m).

Corollary 3.1. Algorithm 1 has an average runtime linear in m in the limit σ_u → ∞.

Proof. We have

lim_{σ_u→∞} nm / (1 − e^(−σ_u))^m = nm / (1 − 0)^m = nm.   (4.9)

It has to be noted that even at moderate values of σ_u, the function m / (1 − e^(−σ_u))^m is almost linear in m. So, a very fast average performance (linear in the problem size nm) can be expected from the algorithm, as is evident from the graphs below.

Figure 4.4: Variation of the function f(m) = m / (1 − e^(−σ_u))^m with σ_u

4.2.3 Quantitative Analysis of Task System Generation

Metrics used for Analysis
Certain metrics derived from the parameters of the task system generation algorithm that have been used for the analysis are discussed below.

Problem Size
The problem size indicates the number of binary variables required to represent the task system partitioning problem as an integer linear program. It is calculated as

s = n × m   (4.10)

where s is the problem size, n is the number of tasks and m is the number of unrelated processors.

Problem Complexity
Since the task partitioning problem is essentially a bin-packing problem with processors as bins and tasks as items, a quantitative characterization of the hardness of the problem can be derived. Problem complexity is calculated as

c = n / m   (4.11)

where c is the problem complexity, and n and m are the number of tasks and the number of processors respectively. Intuitively, the higher the problem complexity, the more difficult it is to pack the bins.

Scale of the Exponential Distribution
The scale parameter of the exponential distribution used to generate utilization values for the tasks is an important simulation parameter, as it directly controls the probability of generating task systems with different utilization behavior. It is denoted by σ_u ∈ R and it dictates the range of overall system utilization of the generated task systems.

Feasibility Ratio
The feasibility ratio of a population of k task systems, of which r are feasible upon the given multiprocessor platform, is given by

fr = r / k   (4.12)

where fr denotes the feasibility ratio. Intuitively, with the other parameters remaining fixed, the feasibility ratio describes the efficiency of the task system generation algorithm.

Deadline Range
The deadline range is another metric of the hardness of the partitioning problem.

[…]

Figure 4.6: Problem Complexity vs Feasibility Ratio at various values of Scale of Exponential Distribution
The results indicate that lowering the value of the scale of the exponential distribution improves the feasibility ratio. This result matches the expected behavior, since at lower values of σ_u, task systems with better utilization behavior are likely to be generated.

Figure 4.7: Problem Complexity vs Feasibility Ratio at various values of Deadline Range

The results indicate that increasing the deadline range improves the feasibility ratio. This is the expected behavior, since the generator has a greater degree of freedom in selecting deadlines when the deadline range is larger, and the likelihood of selecting deadlines that induce feasible task systems increases.

Problem Complexity vs Overall Utilization Analysis
The worst-case utilization of tasks is one of the random variables modeled by the task system generation algorithm. However, the actual utilization can only be observed after a complete simulation of the system. The simulation analysis of the overall system utilization is presented below. The system has been simulated for 100 logical time steps with the arrival rate λ = 2.0 to generate these results.

Figure 4.8: Problem Complexity vs Overall Utilization at various values of Problem Size

The results show a general trend of increasing overall system utilization with increasing problem complexity. This behavior was expected, since problems with higher complexity need to fit a greater number of tasks onto a relatively smaller number of processors. Furthermore, at the same complexity, the results show an increase in overall utilization with increasing problem size. At first glance this is a surprising result. However, an investigation of the relation of problem size to the formulated ILP sheds light on the matter: with the increase in problem size, the number of constraints in the ILP increases, which worsens (increases) the optimal value of the objective (the overall utilization).

4.2.4 Quantitative Analysis of LP Relaxation Method
Since the LP relaxation method finds optimal integer solutions which may or may not be feasible upon the multiprocessor platform, the first property of the method that has been analyzed is the feasibility ratio of a general population of task systems when solved using the LP relaxation method against that using direct solution methods (such as the quantum annealing method). The feasibility ratio that contrasts the performance of the LP relaxation method against the direct methods is formalized as the following system parameter.

LP-feasibility Ratio
The LP-feasibility ratio of a population of task systems, where r of them are deemed feasible by a direct method (for instance the quantum annealing method) and h of them are deemed feasible by both the direct methods and the LP relaxation method, is given by

lpfr = h / r   (4.14)

Intuitively, the LP-feasibility ratio depicts the proportion of task systems that is correctly solved by the LP relaxation method even when the relaxed constraint is taken into consideration. High values of the LP-feasibility ratio indicate a high degree of usability and reliability of the LP relaxation method. The analytical results depicting the LP-feasibility ratio at various values of the system parameters are presented below.

Figure 4.11: Problem Size vs LP Feasibility Ratio at various values of Problem Complexity
Figure 4.12: Problem Size vs LP Feasibility Ratio at various values of Scale of Exponential Distribution
Figure 4.13: Problem Size vs LP Feasibility Ratio at various values of Deadline Range

The results indicate a remarkable property of the LP relaxation method: the proportion of the task system population that can be solved feasibly by the LP relaxation method tends to 1 as the problem size increases, all else being constant. It means that the LP relaxation method can be reliably used in practice, and the speedup it offers over its counterparts (evident from the subsequent runtime analysis) makes it the preferred method for solving the partitioned scheduling problem.
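For reference, the two ratios defined in equations 4.12 and 4.14 can be computed from per-instance outcomes as in the small helper below; the record layout (a pair of feasibility flags per generated task system) is an assumption made for the example.

```python
def feasibility_ratios(results):
    """Compute fr = r/k and lpfr = h/r from per-task-system outcomes.

    `results` is a list of (direct_feasible, lp_feasible) booleans, one pair per
    generated task system, where the first flag comes from a direct method such
    as quantum annealing and the second from the LP relaxation method.
    """
    k = len(results)
    r = sum(1 for direct, _ in results if direct)
    h = sum(1 for direct, lp in results if direct and lp)
    fr = r / k if k else 0.0
    lpfr = h / r if r else 0.0
    return fr, lpfr


# Example: 4 instances, 3 feasible by the direct method, 2 of those also by LP.
print(feasibility_ratios([(True, True), (True, False), (True, True), (False, False)]))
# -> (0.75, 0.6666666666666666)
```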
4.2.5 Quantitative Analysis of Algorithm Runtimes
The primary benefit of using quantum annealing for solving partitioned scheduling problems is the speedup it offers over its counterparts. Google's implementation of the branch-and-bound algorithm for solving ILPs [21] [22] has been used for comparison. It is a metaheuristic algorithm that is surprisingly more powerful than the original branch-and-bound algorithm, as is evident from the results that follow. Furthermore, the LP relaxation method has also been analyzed and the […]

CHAPTER 5
CONCLUSION AND FUTURE WORK

5.1 Conclusion
This work has successfully achieved partitioned scheduling of implicit-deadline, hard real-time, recurrent, sporadic task systems on unrelated multiprocessor platforms using quantum annealing and LP relaxation methods. The LP relaxation method has been shown to be far superior to both quantum annealing and the branch-and-bound algorithm with metaheuristic improvements. Quantum supremacy of quantum annealing has been demonstrated over the branch-and-bound algorithm with metaheuristic improvements. Furthermore, the proposed algorithm for multiprocessor task system generation on unrelated platforms has been shown to be fast and stable and to reliably produce feasible task systems within a certain range. This algorithm, along with the knowledge about its properties obtained from the analytical results, can be used for further research work in the field. Finally, an end-to-end framework has been established for simulation study of the application of quantum as well as classical algorithms for scheduling of hard real-time multiprocessor systems.

5.2 Future Work
The success of the LP relaxation method has paved the way to analyze more complex models of real-time systems that may arise in practice. This work has shown that the NP-hard barrier of computation can be circumvented for the particular model using the relaxation method. Similar solutions may be found for other models, and their applicability in practice can be analyzed in future research endeavours.

REFERENCES

[1] Sanjoy Baruah, Marko Bertogna, and Giorgio Buttazzo. Multiprocessor Scheduling for Real-Time Systems. Springer, 2015.
[2] Sudhakar Sah and Vinay G. Vaidya. A review of parallelization tools and introduction to EasyPar. International Journal of Computer Applications, 56(12), 2012.
[3] Robert I. Davis and Alan Burns. A survey of hard real-time scheduling for multiprocessor systems. ACM Computing Surveys (CSUR), 43(4):1–44, 2011.
[4] Pontus Ekberg and Sanjoy Baruah. Partitioned scheduling of recurrent real-time tasks. In 2021 IEEE Real-Time Systems Symposium (RTSS), pages 356–367. IEEE, 2021.
[5] Omar U. Pereira Zapata and Pedro Mejía Alvarez. EDF and RM multiprocessor scheduling algorithms: Survey and performance evaluation. Sección de Computación, Av. IPN 2508, 2005.
[6] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical Review E, 58(5):5355–5363, November 1998.
[7] A. B. Finnila, M. A. Gomez, C. Sebenik, C. Stenson, and J. D. Doll. Quantum annealing: A new method for minimizing multidimensional functions. Chemical Physics Letters, 219(5):343–348, March 1994.
[8] Silvano Martello and Paolo Toth. Bin-packing problem. In Knapsack Problems: Algorithms and Computer Implementations, pages 221–245, 1990.
[9] Farhang Nemati. Partitioned scheduling of real-time tasks on multi-core platforms, May 2010.
[10] Christos H. Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Courier Corporation, 1998.
[11] Aloysius Ka-Lau Mok. Fundamental design problems of distributed systems for the hard-real-time environment. PhD thesis, Massachusetts Institute of Technology, 1983.
[12] Chung Laung Liu and James W. Layland. Scheduling algorithms for multiprogramming in a hard-real-time environment. Journal of the ACM (JACM), 20(1):46–61, 1973.
[13] Ernst Ising. Contribution to the theory of ferromagnetism. Z. Phys., 31(1):253–258, 1925.
[14] William M. Kaminsky and Seth Lloyd. Scalable architecture for adiabatic quantum computing of NP-hard problems. In Quantum Computing and Quantum Bits in Mesoscopic Systems, pages 229–236, 2004.
[15] Michael A. Nielsen and Isaac Chuang. Quantum Computation and Quantum Information, 2002.
[16] Francisco Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical and General, 15(10):3241, 1982.
[17] Tadashi Kadowaki and Hidetoshi Nishimori. Quantum annealing in the transverse Ising model. Physical Review E, 58(5):5355, 1998.
[18] Mohsen Razavy. Quantum Theory of Tunneling, 2003.
[19] Mark W. Johnson, Mohammad H. S. Amin, Suzanne Gildert, Trevor Lanting, Firas Hamze, Neil Dickson, Richard Harris, Andrew J. Berkley, Jan Johansson, Paul Bunyk, et al. Quantum annealing with manufactured spins. Nature, 473(7346):194–198, 2011.
[20] Erwin Schrödinger. An undulatory theory of the mechanics of atoms and molecules. Physical Review, 28(6):1049, 1926.
[21] Ailsa H. Land and Alison G. Doig. An automatic method for solving discrete programming problems. In 50 Years of Integer Programming 1958–2008, pages 105–132. Springer, 2010.
[…] Ozfidan, Anatoly Yu. Smirnov, et al. Scaling advantage over path-integral Monte Carlo in quantum simulation of geometrically frustrated magnets. Nature Communications, 12(1):1–6, 2021.
[42] Isidore Heller and Charles B. Tompkins. An extension of a theorem of Dantzig's. In Linear Inequalities and Related Systems (AM-38), Volume 38, pages 247–254. Princeton University Press, 2016.
[43] Alan J. Hoffman and Joseph B. Kruskal. Integral boundary points of convex polyhedra. In 50 Years of Integer Programming 1958–2008, pages 49–76. Springer, 2010.
[44] Svatopluk Poljak and Zsolt Tuza. The expected relative error of the polyhedral approximation of the max-cut problem. Operations Research Letters, 16(4):191–198, 1994.
[45] David Avis and Jun Umemoto. Stronger linear programming relaxations of max-cut. Mathematical Programming, 97(3):451–469, 2003.
[46] Antonio De Rosa and Aida Khajavirad. The ratio-cut polytope and k-means clustering. SIAM Journal on Optimization, 32(1):173–203, 2022.
[47] Alberto Del Pia and Mingchen Ma. K-median: exact recovery in the extended stochastic ball model. arXiv preprint arXiv:2109.02547, 2021.
[48] Alberto Del Pia and Aida Khajavirad. Rank-one Boolean tensor factorization and the multilinear polytope. arXiv preprint arXiv:2202.07053, 2022.
[49] Marko Bertogna. Evaluation of existing schedulability tests for global EDF. In 2009 International Conference on Parallel Processing Workshops, pages 11–18. IEEE, 2009.
[50] D-Wave Systems. Constrained quadratic models. https://docs.ocean.dwavesys.com/en/stable/concepts/cqm.html, May 2022.
[51] Gary Kochenberger, Jin-Kao Hao, Fred Glover, Mark Lewis, Zhipeng Lü, Haibo Wang, and Yang Wang. The unconstrained binary quadratic programming problem: a survey. Journal of Combinatorial Optimization, 28(1):58–81, 2014.
[52] Harold W. Kuhn and Albert W. Tucker. Nonlinear programming. In Traces and Emergence of Nonlinear Programming, pages 247–258. Springer, 2014.
[53] William Karush. Minima of functions of several variables with inequalities as side constraints. M.Sc. dissertation, Department of Mathematics, University of Chicago, 1939.
[54] D-Wave Systems. Hybrid solver for constrained quadratic models. dwavesys.com, May 2022.
[55] Giorgio C. Buttazzo. Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications, volume 24. Springer Science & Business Media, 2011.
[56] Michael Dertouzos. Control robotics: The procedural control of physical processes. In Proc. IFIP Congress, pages 807–813, 1974.
[57] George B. Dantzig. Origins of the simplex method. In A History of Scientific Computing, pages 141–151, 1990.
[58] Steven S. Morgan. A comparison of simplex method algorithms. PhD thesis, University of Florida, 1997.