
Predictive Analytics Using Quantum Computing


Despite the large body of literature available for the research to draw upon, a primary gap in knowledge remains, expressed in the research question: in what ways can artificial intelligence-based predictive analytics be advanced by quantum computing?

Methodology and Research Strategy:

Although some experts in the fields of predictive analytics, AI, and quantum computing hope that quantum computing will someday enable computers to produce predictive analytics reports at the same performance levels as human analysts, no one yet knows whether quantum computing will prove capable of advancing computer technology to that capacity.

To answer the research question, the research's associated hypothesis will be tested through the operationalization of independent and dependent variables. The impact of the independent variable upon the dependent variables will be measured in order to support or refute the hypothesis.

The independent variable used in the research is quantum computer technology that is being applied, or is believed capable of being applied, to AI designs usable in the field of predictive analytics. The dependent variables are computer-based problem-solving capabilities, probability outputs, and machine learning capabilities that can be applied to the field of predictive analytics.

If the research finds that the independent variable significantly impacts the dependent variables, the hypothesis will be considered supported. If the impact proves insignificant or negligible, the hypothesis will be considered unsupported. The variables will be operationalized through the examination and analysis of selected case studies.

The research's control group case studies will concern advanced classical computer technology. The test group case studies will concern the problem-solving skills, probability outputs, and machine learning capabilities of quantum computer technology applicable to predictive analytics. The impact of the independent variable upon the dependent variables will be measured by examining and analyzing any performance enhancement evident in the quantum computer technology when compared to the non-quantum computer technology of the control group case studies.

The test group will comprise selected case studies of quantum computer performance applicable to the field of predictive analytics; the control group will comprise case studies of comparable non-quantum computer performance. Both qualitative and quantitative measurements may be applied in operationalizing the variables.

The case studies used in operationalizing the variables will be collected from sources such as academic journals, articles, interviews, reports, and other reliable sources of information that fall within the parameters of the research. Each case study will be selected on the grounds of being compatible with and applicable to the operationalization of the variables. The sources will then be sorted into the appropriate cases; examination and analysis of the case studies will follow once the selection and organization phases are complete.

The case studies will be analyzed in order to operationalize the variables and thus test the research's hypothesis. The hypothesis has been devised in a manner that allows proper testing to take place, and through that testing it will be supported or found unsupported. The research puts forth the following hypothesis: although computer-based predictive analytics technology is currently capable of producing only reports that lack the analytical problem-solving capacities of human analysts, the development and subsequent application of quantum computing technology will bring computer-driven predictive analytics to at least the level of analytical problem-solving routinely offered by human analysts.

Once the hypothesis has been properly tested, the most advanced computer technology in use or emerging that is most probably applicable to computer-based predictive analytics will be considered in light of the tested hypothesis. The test results, and their relevance to the most advanced computer technology being developed for predictive analytics, will then lead to an answer to the research question: in what ways can artificial intelligence-based predictive analytics be advanced by quantum computing?

The testing of the research's stated hypothesis will allow answers to be reached regarding the research question. Certain limitations and biases are expected to accompany the case studies themselves. Validity problems that may limit the research include market realities that encourage favorable reporting of quantum computer performance by authors and reporters (so as to increase future funding); test failures are less likely to be published or reported than successes in emerging computer technology tests.

The study will also face limitations stemming from the fact that emerging technology such as quantum computing, along with very recent developments in other computer-based predictive analytics technology, is still in the testing phases of development; the true potential and limitations of the technology examined in the case studies therefore cannot be fully known at the time the research's variables are operationalized.

Findings and Analysis:

In order to operationalize the variables and test the hypothesis, the research turns to the examination and analysis of the selected case studies. The first case study concerns comparison tests between a quantum computer design known as the D-Wave and classical computer technology.

No separate control group section is needed for case study one: the performance results of the quantum computer design examined in the first case study serve as the test group, while the performance results of the classical computer technology tested against the D-Wave serve as a de facto control group.

The second case study entails the test results of a different quantum computer design that functions on only four quantum bits; that design is considered alongside a control group case study involving the performance results of an innovative classical computer design tested on tasks similar to those the four-qubit computer was designed to perform.

Case Study One:

The D-Wave quantum computer will be examined in terms of any potential quantum speed-up the machine may exhibit. The question of whether a quantum computer can outperform classical computer technology is central to operationalizing the research's variables. In order to appreciate the test results, a general understanding of how the D-Wave operates is necessary.

Classical computer technology is not limited to a single design method; likewise, quantum computers can be built with a diversity of designs, and the logical assumption is that some quantum computer designs will be more effective than others. According to Metz (2015), D-Wave Systems is a quantum computer company that has partnered with Google and NASA to develop quantum computer technology.

D-Wave critics have argued that the machine is not truly a quantum computer; supporters counter that it is, because the system runs on quantum annealing. The D-Wave cannot be used for a broad range of computer applications: its quantum annealing design limits the machine to optimization puzzles. A typical optimization puzzle the D-Wave is designed to solve is a scenario in which a destination may be reached by a very large number of paths, only one of which is the fastest route.

The D-Wave is able to analyze the many different routes to a particular destination and determine which is optimal; the machine uses machine learning principles to optimize performance by determining the fastest path to the destination (Metz 2015).
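To make the kind of optimization puzzle described above concrete, the following sketch enumerates every ordering of a few intermediate stops and picks the fastest route. This is a toy brute-force illustration, not D-Wave's actual method; the stop names and travel times are invented.

```python
# Toy route-optimization puzzle: many possible paths to one destination,
# with the goal of finding the fastest. All values are hypothetical.
from itertools import permutations

# Invented travel times (minutes) between stops A, B, C and the start/end.
times = {
    ("start", "A"): 10, ("start", "B"): 15, ("start", "C"): 20,
    ("A", "B"): 35, ("A", "C"): 25, ("B", "C"): 30,
    ("A", "end"): 20, ("B", "end"): 10, ("C", "end"): 15,
}

def travel(a, b):
    """Look up a travel time regardless of direction."""
    return times.get((a, b), times.get((b, a)))

def fastest_route(stops):
    """Brute-force search: try every ordering of the intermediate stops."""
    best_route, best_time = None, float("inf")
    for order in permutations(stops):
        path = ("start",) + order + ("end",)
        total = sum(travel(path[i], path[i + 1]) for i in range(len(path) - 1))
        if total < best_time:
            best_route, best_time = path, total
    return best_route, best_time

route, minutes = fastest_route(["A", "B", "C"])
print(route, minutes)
```

Brute force works only for tiny instances; the number of orderings grows factorially, which is exactly why specialized optimization hardware is attractive for large versions of this problem.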

A chief limitation of the D-Wave, therefore, is that the machine has been designed specifically to handle optimization problems; nonetheless, if the D-Wave is capable of quantum speed-up, that ability could certainly be applied to the field of predictive analytics, and optimization problem-solving capabilities may be applicable to the problem-solving inherent in the development of AI.

Many other considerations and capabilities would have to be addressed before true AI can be invented. According to Metz (2015), classical computation uses binary language that reads in ones and zeroes; when a switch is on, a one is represented, and when a switch is off, a zero is represented.

Quantum computers likewise use binary language, but they use quantum bits, or qubits. While a qubit is in superposition, it is both a one and a zero at the same time.

Qubits in superposition allow multiple binary states to be held simultaneously (Metz 2015 describes this as a fourfold increase in stored information), and when decoherence takes place, each qubit then represents only one binary state, which can be read as a concrete number by the CPU (Metz 2015). The manipulation of quantum particles may therefore allow binary computing to achieve a quantum speed-up that classical computer technology never could.
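The superposition-then-decoherence idea above can be sketched with a classical simulation of a single qubit: an equal superposition holds amplitude on both basis states at once, and measurement collapses it to one concrete bit. This is a minimal illustrative model, not real quantum hardware.

```python
# Classical simulation of one qubit in equal superposition.
import math
import random

# |psi> = (|0> + |1>) / sqrt(2): amplitudes for the 0 and 1 basis states.
amplitudes = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Born rule: measurement probabilities are the squared amplitudes.
probs = [a * a for a in amplitudes]
print(probs)  # each is (approximately) 0.5

def measure(probs, rng=random.random):
    """Collapse the superposition: return 0 or 1 with the given probabilities."""
    return 0 if rng() < probs[0] else 1

# After measurement (decoherence), the qubit reads as one concrete bit.
print(measure(probs))
```

Before measurement the state carries both possibilities; after measurement only a single binary value remains, which is the "concrete number" the surrounding text describes.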

Quantum computers such as the D-Wave could thus perform binary-based calculations far more effectively than classical computers, because superpositional states allow more work to be performed at the same time. The designers of the D-Wave may or may not have chosen the best method of attempting quantum speed-up through qubit operations; according to Metz (2015), the D-Wave's qubits do not function as semiconductors but as superconductor-based transistors.

The D-Wave's qubit transistors function at extremely cold temperatures and constantly send energy in two directions around a ring; algorithms are run via these superconducting qubit rings to calculate the answers to optimization problems. The D-Wave's processor itself does not consume very much energy, so its operation is not constrained by traditional CPU power considerations; most of the energy consumption associated with the D-Wave relates to the machine's cooling system.
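A rough classical cousin of the quantum annealing described above is simulated annealing: start "hot" so the search can escape bad configurations, then cool so it settles into a low-energy answer. The sketch below anneals a tiny four-spin Ising model; the coupling values are invented for illustration, and this is the classical heuristic, not D-Wave's hardware process.

```python
# Simulated annealing on a toy 4-spin Ising model (invented couplings).
import math
import random

# Couplings J[i][j] between spins; each spin takes the value +1 or -1.
J = {(0, 1): 1.0, (1, 2): -1.0, (2, 3): 1.0, (0, 3): -1.0}

def energy(spins):
    """Ising energy: sum of J_ij * s_i * s_j over the coupled pairs."""
    return sum(j * spins[a] * spins[b] for (a, b), j in J.items())

def anneal(steps=5000, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(4)]
    temp = 2.0
    for _ in range(steps):
        i = rng.randrange(4)
        old = energy(spins)
        spins[i] *= -1                       # propose flipping one spin
        delta = energy(spins) - old
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if delta > 0 and rng.random() >= math.exp(-delta / temp):
            spins[i] *= -1                   # reject the uphill move: undo
        temp = max(0.01, temp * 0.999)       # gradually cool
    return spins, energy(spins)

spins, e = anneal()
print(spins, e)
```

For this small, unfrustrated model the search reliably reaches the ground-state energy of -4; quantum annealing pursues the same goal (a minimum-energy configuration) by physical rather than simulated means.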

The D-Wave system is potentially faster than traditional computer technology, but its performance has so far been somewhat of a disappointment; the D-Wave has not exhibited the remarkable processing speeds that experts had anticipated (Metz 2015). The use of superconductors as a form of qubit has been the mainstay of the D-Wave, a design that allows quantum computation to take place via the phenomenon known as quantum annealing.

However, the true performance of the D-Wave can only be learned through rigorous testing. Quantum speed-up cannot be achieved if the computer in question is not really a quantum machine, and the designers of the D-Wave can now address critics on that particular question: according to Brandom (2014), Google's D-Wave laboratory discovered that the D-Wave II system was running in the capacity of a genuine quantum computer, a discovery that bolstered D-Wave Systems' marketing claims that the machine's functions were in fact quantum-based.

The D-Wave developers have thus been able to silence critics who had alleged that the D-Wave was merely a pseudo-quantum machine. The use of quantum annealing through superconducting technology is a valid method of qubit design; whether the D-Wave can achieve quantum speed-up, however, is another question.

Comparing the single D-Wave computer against multiple classical computers working in unison as a cluster is one method of discovering whether the D-Wave exhibits any quantum speed-up over classical processor technology. According to Brandom (2014), the D-Wave, which sells for fifteen million dollars per computer, delivered a somewhat disappointing performance during trials. Microsoft researchers collaborated with Google to test the D-Wave against classical computer technology, and the D-Wave was tested in the performance field in which the quantum machine is designed to hold a comparative advantage.

When tested against Microsoft Research's top-of-the-market classical computer clusters, the D-Wave displayed no obvious performance advantage over the classical computational techniques used in the trials. Quantum computers such as the D-Wave are being developed to achieve quantum speed-ups, notably enhanced processing speeds that can assist users running very large amounts of data.

The D-Wave's quantum annealing functionality may or may not be able to achieve significant quantum speed-ups. While the speed tests between the D-Wave and Microsoft's classical clusters did not prove that the D-Wave exhibits no quantum speed-up, any speed-up the D-Wave offered during the tests would have been minimal; because the system did not deliver exponentially faster processing through quantum annealing, the high cost of building D-Wave computers may not be financially practical.

The trials did not show that the D-Wave was in any way inherently prevented from reaching faster speeds than it displayed; in fact, the pioneering D-Wave computer proved to be as fast as Microsoft's more commercially developed computer clusters (Brandom 2014). The comparison between the D-Wave and Microsoft's classical clusters does not seem to indicate any real sign of significant quantum speed-up; however, the D-Wave has proven to be an advanced computer model.

Because the D-Wave has already been developed and heavily invested in, dramatically shifting away from quantum annealing toward a different quantum computing method would most likely not be an option for its developers. Since uncertainty remains over whether the D-Wave can actually achieve quantum speed-up, the developers may well want to return to the drawing board and determine whether the machine can be better designed and specified to perform better in similar tests.

Quantum annealing may be a useful method of quantum computing, but at this early stage of the D-Wave's existence, serious doubt remains about the possibility of significant quantum speed-ups via quantum annealing. The tests have not necessarily disproven the possibility that the D-Wave can exhibit quantum speed-up over classical computer technology.

Detractors of quantum annealing could argue that if such speed-up is performed at all by the D-Wave, the advantages are minimal and difficult to observe. According to Ronnow, Wang, Job, Boixo, Isakov, Wecker, Martinis, Lidar, and Troyer (2014), the 2014 test that compared the D-Wave against classical computer technology indicates that the D-Wave failed to perform faster than the classical technology used in the comparison.

The model used in the tests was the D-Wave II, which runs on slightly over five hundred qubits. In terms of overall computational speed, the quantum machine failed to outperform the classical computer technology; however, some controversy remains over whether the D-Wave was faster during specific periods of the speed trials (Ronnow et al. 2014).

Quantum speed-up may be very difficult to observe on many occasions; this should not surprise test teams, since the phenomena of quantum physics are often mysterious and can be difficult to associate with empirically tested computer speeds.

The researchers behind the D-Wave test can safely declare that the D-Wave was not better overall at solving optimization problems than the classical computer clusters, though it did match those clusters in speed. The designers of the D-Wave decided that certain improvements could be made to the quantum machine, and those advances led to a new model.

D-Wave's developers were able to perform new tests after the disappointing showing against Microsoft's classical computer clusters. According to Denchev, Boixo, Isakov, Ding, Babbush, Smelyanskiy, Martinis, and Neven (2016), processing speeds can be enhanced through a quantum annealing phenomenon known as finite-range tunnelling.

The D-Wave 2X has demonstrated an ability to solve tunnelling problems involving energy barriers; this type of computational problem can also be simulated on classical CPUs, and the D-Wave 2X exhibited higher performance during the comparison tests between classical computer technology and the new quantum machine.
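The "energy wall" idea above can be illustrated with a toy one-dimensional landscape: a shallow local minimum sits behind a barrier that separates it from the deeper global minimum. Purely local, downhill search gets trapped there, which is precisely why passing through barriers (tunnelling) rather than climbing over them is valuable. The landscape values below are invented for illustration.

```python
# Toy 1-D energy landscape: local minimum at index 2, an energy "wall"
# around index 5, and the global minimum at index 8. Values are invented.
landscape = [5, 3, 1, 4, 7, 9, 6, 2, 0, 3, 6]

def greedy_descent(pos):
    """Move to a lower-energy neighbor until no neighbor is lower."""
    while True:
        neighbors = [p for p in (pos - 1, pos + 1) if 0 <= p < len(landscape)]
        best = min(neighbors, key=lambda p: landscape[p])
        if landscape[best] >= landscape[pos]:
            return pos          # stuck: every neighbor is uphill
        pos = best

print(greedy_descent(0))                 # trapped at the local minimum (index 2)
print(landscape.index(min(landscape)))   # the global minimum sits at index 8
```

A search that could tunnel through the wall at indices 4-6 would reach the global minimum; local descent alone cannot.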

A central processing unit ran simulated tunnelling tests in a comparison trial against the D-Wave 2X, with both sides running problems using heuristic techniques to determine the best way to tunnel through the energy barriers in the given problems.

The tests proved the D-Wave 2X to be more efficient than the classical computer technology used as its rival in the recent speed trials. The tests involved an algorithm known as Quantum Monte Carlo, which can be run in simulated form on classical computer technology.

The D-Wave was found to be faster than the classical computer technology when the Quantum Monte Carlo algorithm was used in the tests; however, the researchers acknowledge that classical computer technology can use different algorithms that allow non-quantum CPUs to match the performance of the D-Wave 2X (Denchev et al. 2016).

The D-Wave therefore performed better after the upgrade to the D-Wave 2X; however, some validity concerns could be raised over the results. The developers may have chosen to run the Quantum Monte Carlo algorithm specifically to ensure that the new model had the best chance of outperforming the classical computer technology.

Case Study Two: Test Group:

The D-Wave is not the only functioning quantum computer in the world today. A team of computer scientists has invented a quantum computer that works on a qubit design different from the superconducting method used by the D-Wave.

Quantum speed-up was not the subject of interest when the new quantum computer was tested. According to the source titled First Demonstration of Artificial Intelligence on a Quantum Computer (2014), the qubits in the quantum computer exist within a molecular structure of fluorine, iodine, and carbon, with one of the carbon atoms being a carbon-13 isotope.

The computer's quantum bits function by being immersed in a magnetic field so that a synchronized spin can be established for the purpose of creating binary language; the spin can be flipped in the other direction by means of a radio wave.

Zhaokai Li's team of computer scientists is even capable of manipulating each individual nucleus inside the qubit molecule by means of wave frequencies specifically attuned to influence only that nucleus; the concert of combined spins among the nuclei allows the molecule to function in the capacity of a quantum logic gate.

The carbon-13 atom is observed by the scientists to ascertain whether the computer has determined a character image to be a six or a nine: an upward spike indicates a six, and a downward spike indicates a nine.

Li's team has managed to invent a quantum computer that reliably distinguishes the character six from the character nine, and the computer can correctly perform that task even when the characters are handwritten; the quantum computer's machine learning capability in the character test is believed by Li and his team to be applicable to the processing of Big Data (First Demonstration of Artificial Intelligence on a Quantum Computer 2014).

The quantum computer designed by Li's team may in fact be the most advanced quantum computer yet created. The design may be very advanced, but the capabilities of a computer that functions on only a very small number of quantum bits may be inherently limited. Running Li's computer on tasks that require artificial intelligence-level capabilities is perhaps the best method of testing the new quantum computer's learning capabilities.

The benefits of the qubit design can be readily understood when the test results are taken into account. The objective during the tests was to determine whether machine learning was taking place while the machine processed information. According to Li, Liu, Xu, and Du (2015), the character for six was categorized as positive and the character for nine as negative during the conditioning of the four-qubit computer in the preliminary phase of the recognition test.

Handwritten characters were input into the quantum machine in random order to ascertain the performance of the support vector machine (SVM) and the associated feature vectors, which are expressed as mathematical equations relating to the numerical images of six and nine. The four-qubit computer functions by rotating quantum particles at the kernel level of the computer's circuit.

The first two qubits are used to find a matrix; during that process, the training information is recorded before the kernel matrix is established to match the density of the initial qubits. The four-qubit computer is then able to classify through the use of quantum gates.

The four-qubit machine's optical character recognition is made known to researchers by a noticeable spike in the computer's output spectrum, which lets the computer's users know that a positive classification and recognition of a numerical character has occurred (Li, Liu, Xu, and Du 2015).
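A classical sketch can clarify the SVM-style procedure described above: build a kernel (Gram) matrix from training vectors, then classify a new vector by its kernel-weighted similarity to the labeled training examples. This is not the quantum circuit itself; the feature values and weights below are invented for illustration.

```python
# Classical analogue of a kernel-based classifier for "6" vs "9".
# Two invented training feature vectors (e.g. pixel-density features),
# with labels +1 for "6" and -1 for "9".
train_x = [[0.8, 0.2], [0.2, 0.8]]
train_y = [+1, -1]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Linear kernel matrix: K[i][j] = <x_i, x_j>.
K = [[dot(xi, xj) for xj in train_x] for xi in train_x]

def classify(x, alphas=(1.0, 1.0)):
    """Sign of the kernel-weighted vote; real alphas would come from training."""
    score = sum(a * y * dot(xi, x) for a, y, xi in zip(alphas, train_y, train_x))
    return +1 if score >= 0 else -1

print(K)
print(classify([0.7, 0.3]))   # nearer the "6" training vector
```

The quantum version encodes the training data and kernel matrix in qubit states, but the logical role of the kernel matrix and the weighted vote is the same.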

The test results strongly indicate that some rudimentary machine learning was taking place while the four-qubit quantum computer was operating. Differentiating between a six and a nine is not a particularly complex task for a human being; however, the fact that the computer can distinguish between two handwritten characters is a significant achievement and shows that Li's team has invented a useful, working quantum computer.

The design of the quantum computer relies on a stepwise process: the first inputs of information affect how the computer handles new inputs, and in this way the computer is able to function properly. According to the source titled First Demonstration of Artificial Intelligence on a Quantum Computer (2014), the computer scientists in the team led by Zhaokai Li are affiliated with the University of Science and Technology of China, and the quantum computer that Li's team created recognizes handwritten characters by comparing the handwriting to a template.

The template of characters that the quantum computer uses is constructed by inputting images through a scanner, at which point the quantum computer calculates the pixel concentration of each input image so as to record a vector for the image of a numerical character. The quantum computer is then able to mathematically calculate the difference in pixel concentration between the character representing six and the character representing nine; that mathematical distinction between the six and nine characters is stored in the computer's recorded template (First Demonstration of Artificial Intelligence on a Quantum Computer 2014).
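The template approach above can be sketched classically: represent each character as a small binary grid, use the pixel concentration of the image's halves as a feature vector, and classify a new handwritten character by its distance to the recorded templates. The toy "images" below are invented and far cruder than real scanned characters.

```python
# Toy template classifier for "6" vs "9" using pixel-concentration features.
SIX = [
    [0, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 1],
]
NINE = [row[::-1] for row in SIX[::-1]]   # a 9 is roughly a rotated 6

def features(img):
    """Pixel concentration of the top and bottom halves of the image."""
    half = len(img) // 2
    top = sum(sum(r) for r in img[:half])
    bottom = sum(sum(r) for r in img[half:])
    total = top + bottom
    return (top / total, bottom / total)

templates = {"6": features(SIX), "9": features(NINE)}

def classify(img):
    """Pick the template whose feature vector is nearest the input's."""
    f = features(img)
    return min(templates, key=lambda k: sum((a - b) ** 2
                                            for a, b in zip(f, templates[k])))

# A slightly "handwritten" six: one stray pixel differs from the template.
messy_six = [row[:] for row in SIX]
messy_six[0][2] = 1
print(classify(messy_six))
```

Because the features summarize where ink is concentrated rather than matching pixels exactly, a slightly distorted handwritten character still lands closer to the correct template.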

Machine learning can therefore be performed on the four-qubit computer. The foundations of machine learning are laid when the original information is input into the computer's database; the computer is then able to make determinations about any new information that is similar to the old information already in its database.

The process of taking in original information and then using it to differentiate between new pieces of information is also performed by the human brain; the quantum computer that Li's team invented thus appears capable of exhibiting a very basic form of artificial intelligence. The difficult part of the distinction process comes when the original template must be relied upon to make judgment calls about new characters written by hand.

Handwritten characters may be harder for a computer to understand. According to First Demonstration of Artificial Intelligence on a Quantum Computer (2014), the original inputs and recorded numerical characters allow the quantum computer to make further distinctions between the characters six and nine during the part of the test in which new inputs are scanned into and analyzed by the quantum computer.

The quantum computer is able to distinguish new images of the characters six and nine, which, unlike the characters used in the template, are handwritten; the distinction between the two numerical characters is achieved via the recorded numerical calculations of pixels in each of the original images input into the quantum computer.

The team that invented the character-reading quantum computer used a pioneering quantum-based learning algorithm to enable the computer to recognize the images (First Demonstration of Artificial Intelligence on a Quantum Computer 2014). The use of a learning algorithm capable of running on the four-qubit quantum computer clearly indicates that Li's team has managed to invent a very dynamic machine.

The current era of quantum computer science may be very young, but the achievements thus far are highly encouraging. Quantum computers are already able to learn and adapt to new information, and there does not appear to be any reason why even more significant advances should not take place in the field of quantum computing.

Greater investment in the development of quantum computers, both in financial funding and in human talent, would most likely bring about more advancements and useful applications; Li's team has invented a quantum computer that could well be useful in the field of predictive analytics.

Case Study Two: Control Group:

Machine learning is not necessarily limited to the realm of quantum computing; indeed, classical computer technology may be capable of performing such tasks. Computer tests that have been conducted in recent years have examined the ability of classical computer technology to perform tasks that require some level of machine learning.

An interview between Amara Angelica and SAIL's Jürgen Schmidhuber (2012) explains that the Swiss Artificial Intelligence Lab has designed computer technology inspired by biological neural networking; the technology has performed well in computer image recognition tests because of its machine learning capabilities.

The Swiss Artificial Intelligence Lab won the ICDAR Offline Chinese Handwriting Competition in 2011; the Swiss designers of the winning program were not literate in Chinese, yet the recognition program they entered won the competition through superior performance (Angelica and Schmidhuber 2012).

The computer technology created by the Swiss Artificial Intelligence Lab is a testament to the effectiveness of neural computer designs. Classical computer technology can be designed to learn thanks to new techniques in computer science; quantum computing is not the only form of computer technology currently capable of performing machine learning functions.

The four-qubit computer created by Li's team can be compared to the technology invented by the Swiss Artificial Intelligence Lab. Similarities between the two technologies may be found in the methods by which the machines learn; according to Angelica and Schmidhuber (2012), the functional design of the pattern recognition program the Swiss Artificial Intelligence Lab invented differed from more mainstream computer program models.

The image pattern recognition program designed by the Swiss team utilized graphics cards to dramatically improve the accuracy of its performance. The memory of the neuron-inspired program works on both a long-term and a short-term basis and calculates the probabilities that previously seen patterns will be encountered again in the future (Angelica and Schmidhuber 2012).
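The idea of estimating the probability that a previously seen pattern recurs can be illustrated with a much simpler classical device than SAIL's network: count how often each symbol follows each other symbol in an observed sequence, then read off conditional probabilities. The sequence below is invented, and this frequency counter is only a toy analogue of the memory mechanism described above.

```python
# Toy pattern-probability estimator: a first-order transition counter.
from collections import Counter, defaultdict

observed = list("ABABABCABAB")   # an invented sequence of observed patterns

# Count how often each symbol follows each other symbol.
transitions = defaultdict(Counter)
for prev, nxt in zip(observed, observed[1:]):
    transitions[prev][nxt] += 1

def prob_next(prev, nxt):
    """Estimated P(next pattern | previous pattern), from the counts."""
    total = sum(transitions[prev].values())
    return transitions[prev][nxt] / total if total else 0.0

print(prob_next("B", "A"))   # "A" usually follows "B" in this sequence
```

A recurrent network learns far richer, longer-range dependencies than this, but the underlying goal, predicting future patterns from the statistics of past ones, is the same.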

The computer technology from the Swiss Artificial Intelligence Lab and the four-qubit quantum machine designed by Li's team both rely upon the intake and analysis of original information in order to make determinations about new information inputs. The award-winning technology of the Swiss Artificial Intelligence Lab may use only classical computer technology, but its fundamental machine learning philosophy is similar to that of Li's quantum computer.

Human beings learn by taking in information that is analyzed in order to make sense of new incoming information, and the neuron-inspired designs of the Swiss Artificial Intelligence Lab appear to learn in a similar manner.

The comparison between the learning style of the human brain and the computer technology designed by the Swiss Artificial Intelligence Lab (SAIL) was part of a performance test of one of the Swiss lab’s products. The straightforward method of simply measuring the performance of a computer against the performance of a human being was used in the test; according to Angelica and Schmidhuber (2012), the innovative design of the Swiss team’s program was instrumental in their technology’s winning performance in the Traffic Sign Recognition Competition.

The test involved pattern recognition tasks related to traffic signs, was the eighth pattern recognition competition that the Swiss Artificial Intelligence Lab had won, and demonstrated that the team’s program could outperform human beings at pattern recognition (Angelica and Schmidhuber 2012). The ability of SAIL’s computer design to perform machine learning allowed that particular model to best its human competition in the test; however, the computer model is most likely not as versatile as a human brain and cannot learn from as wide a diversity of information.

SAIL’s computer that won the traffic sign competition was very good at learning from traffic sign patterns, but the human brain may be better at learning from a broader range of information, such as patterns found in classical music or facial expressions. SAIL’s computer technology is very good at learning information strictly related to specific tasks, and on some occasions appears to be superior to human brains in terms of learning. The task of recognizing images and then learning the patterns that exist between those images itself entails the ability to learn.

Human beings are able to learn patterns that exist between images because of the way the human brain functions; computer scientists can draw on the efficiency of the brain in order to design computers that recognize relationships between images. According to Angelica and Schmidhuber (2012), the Swiss Artificial Intelligence Lab has designed computer technology that is inspired by biological neural networking.

SAIL’s computer technology has performed well in computer imagery recognition tests because of its machine learning capabilities. The Swiss Artificial Intelligence Lab managed to design a program that bested every computer program and human rival in a pattern recognition contest; the program was over ninety-nine percent accurate during the tests (Angelica and Schmidhuber 2012).

The performance of SAIL’s computer technology is very impressive; out-performing the other computer competitors in the competition was a clear indication of SAIL’s innovative and superior computer design. Indeed, the model that SAIL entered in the Traffic Sign Recognition Competition proved superior not only to the other computers in the competition but also to the human competitors.

The award-winning computer technology made by SAIL soundly demonstrates the ability of computer technology to perform machine learning functions. The ability of a computer to recognize patterns in images appears to have reached the performance level of a human brain, or perhaps to have slightly surpassed some of the brain’s abilities.

The reason SAIL’s computer is able to outperform other computer technology and even match or exceed the human competition in this specific pattern recognition test is most likely SAIL’s new and innovative design, which was invented to function in a manner similar to the neuronal firings of the human brain; according to Markoff (2012), the Swiss Artificial Intelligence Lab’s technology competed in a contest to recognize patterns of road signs.

The computer program made by the Swiss team managed to identify patterns in fifty thousand images with over ninety-nine percent accuracy; the best human who took part in the same competition also managed to recognize patterns with just over ninety-nine percent accuracy. The Swiss Artificial Intelligence Lab designed the winning computer technology in a manner that emulates a brain, and this allows machine-powered deep learning to take place (Markoff 2012).

The human brain’s neuronal functions appear to be a very effective model to mimic if computer scientists intend to further advance the ability of computers to perform machine learning tasks. SAIL’s technology appears capable of performing machine learning in a manner that is not entirely dissimilar to the four-qubit computer designed by Zhaokai Li’s team; however, certain factors need to be taken into consideration: both Zhaokai Li’s four-qubit computer and SAIL’s neuronal classical computer are examples of newly emerging computer technology.

The four-qubit computer is an example of quantum machine learning, and SAIL’s technology is an example of neuronal computing; the fact that both designs are able to learn indicates that both quantum computing and neuronal computing can perform machine learning. The examination and analysis of the case studies indicate that the independent variable has a reasonably strong impact upon the dependent variables; in other words, the operationalization of the variables shows that the independent variable positively influences the dependent variables.

The results found after the operationalization of the variables support the hypothesis; however, the impact of the independent variable was somewhat moderate, which limits the scope of the hypothesis’s expectations. The hypothesis is found to be supported, but certain inherent limitations exist concerning quantum computer technology’s advancement of computer-driven predictive analytics.

The D-Wave’s quantum computational ability to solve optimization problems is at least as efficient as the most advanced classical computer technology; the fact that a single computer running on roughly five hundred quantum bits could match the processing power of entire classical computer clusters is a strong indication that quantum computing technology is far superior in certain respects to classical processing models.

Classical computer technology has been under incrementally advancing development for decades, and computer engineers have become very skilled at designing such technology; quantum computer technology, on the other hand, has only recently emerged beyond prototype stages of development. The fact that a relatively recently developed quantum computer model such as D-Wave could match the performance of some of the best classical computer clusters clearly shows that quantum computer technology is the next step in computer evolution.

The optimization problems that the D-Wave was invented to solve can be applied to the intelligence field of predictive analytics. The optimization answers that the D-Wave is capable of providing after analyzing large amounts of information would be very useful when analyzing Big Data; however, because the D-Wave is limited to solving certain types of optimization problems, that particular quantum computer is incapable of solving the variety of Big Data problems that do not pertain to optimization.

The diversity of Big Data that must be analyzed in the field of predictive analytics is very large, and the D-Wave is only able to solve problems for a narrow spectrum of information found in Big Data. Superconducting qubits that utilize the phenomenon of quantum annealing may only be able to perform certain functions; that being considered, some basic AI functionality may be offered by the D-Wave computer design.

The ability of the D-Wave to produce optimization solutions entails that the quantum computer possesses some rudimentary problem-solving skill, which is a basic component of artificial intelligence. New versions of the D-Wave that run on more qubits than the current models and make more sophisticated use of machine learning algorithms would most likely achieve even higher levels of processing power.
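The optimization problems the D-Wave targets are commonly expressed in quadratic unconstrained binary optimization (QUBO) form and solved by quantum annealing. As a minimal sketch of that problem class, the function below runs classical simulated annealing on a toy QUBO instance; `solve_qubo` and its cooling schedule are illustrative assumptions, a classical stand-in rather than the D-Wave’s actual hardware process:

```python
import math
import random

def solve_qubo(Q, n, steps=5000, seed=0):
    """Classical simulated annealing for a QUBO instance:
    minimize sum over (i, j) of Q[(i, j)] * x[i] * x[j], x in {0, 1}^n.
    A classical stand-in for the annealing the D-Wave performs."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def energy(bits):
        return sum(c * bits[i] * bits[j] for (i, j), c in Q.items())

    current = energy(x)
    for step in range(steps):
        temp = max(0.01, 1.0 - step / steps)   # linear cooling schedule
        i = rng.randrange(n)
        x[i] ^= 1                              # propose flipping one bit
        proposed = energy(x)
        if proposed > current and rng.random() >= math.exp((current - proposed) / temp):
            x[i] ^= 1                          # reject worse move: undo flip
        else:
            current = proposed                 # accept the move
    return x, current

# Toy instance: reward x0 and x1 individually, penalize setting both.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 3.0}
bits, e = solve_qubo(Q, n=2)
print(bits, e)   # optimum sets exactly one bit, energy -1.0
```

The quantum version explores the same energy landscape by quantum-mechanical tunnelling rather than thermal hops, which is where the hoped-for speed-up lies.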

In the future, better versions of the D-Wave may periodically be developed; thus, Moore’s Law would continue to hold. Ever-advancing versions of the D-Wave would give quantum computer technology the ability to increasingly perform at the level of artificial intelligence; likewise, such AI-level capabilities would give future models of the D-Wave a place in the field of predictive analytics, matching or exceeding the problem-solving abilities of human beings in certain areas of analysis.

New models of the D-Wave would be able to match or even exceed human beings in certain areas of analysis applicable to predictive analytics; nonetheless, more advanced D-Wave models would be inherently limited to specific roles as artificially intelligent analysts. Human beings would still outperform future D-Wave models across a wide range of analytical tasks in the field of predictive analytics; therefore, both human analysts and future models of the D-Wave would benefit from the synergy of a combined effort to analyze information and produce predictive analytics reports.

The four-qubit quantum computer invented and tested by the team led by Zhaokai Li appears to be capable of machine learning; the computer is able to analyze imagery input data and then take that input’s relevant details into account in order to make determinations about new but similar inputs, thereby performing a machine learning task. The neuronally inspired computer technology designed by SAIL is classical computer technology, not a quantum computer; however, SAIL’s neuronal design is able to learn from previous imagery inputs in a manner that is not dissimilar to Zhaokai Li’s quantum computer.
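The learning behaviour described here, storing previously analyzed inputs and judging new inputs by their similarity to them, can be sketched with a simple nearest-neighbour classifier. This is an illustrative classical analogue, not the algorithm used by either team; `classify_nearest` and the toy feature vectors are invented for the example:

```python
import math

def classify_nearest(training, new_vector):
    """Label a new feature vector with the label of its closest
    previously seen example (1-nearest-neighbour) -- a minimal
    stand-in for 'learn from old inputs, judge new similar inputs'."""
    best_label, best_dist = None, float("inf")
    for vector, label in training:
        dist = math.dist(vector, new_vector)   # Euclidean distance
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy 2-D "image features": previously analyzed inputs with labels.
training = [((0.0, 0.0), "circle"),
            ((1.0, 1.0), "square"),
            ((0.1, 0.2), "circle")]

print(classify_nearest(training, (0.05, 0.1)))   # close to the circles
```

Li’s team reported a quantum speed-up for exactly this kind of distance-based classification, which is why even so small a sketch is relevant to the comparison.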

The machine learning capabilities displayed by SAIL’s classical but neuronally inspired computer technology and by Zhaokai Li’s four-qubit quantum computer both demonstrate artificial intelligence skills applicable to the field of predictive analytics; indeed, the Big Data relevant to predictive analytics is often in the form of imagery. The ability of computers to learn from previous images in Big Data and then make determinations about new but similar data would allow more sophisticated predictive reports to be produced.

The quantum computer designed by Zhaokai Li’s team and the neuronal computer technology designed by the SAIL could feasibly be combined into a form of computer network or cluster in the future, and this combination of technology would allow for an increased level of artificial intelligence. The ability for computer technology to associate relationships of similar imagery that exists in a sea of Big Data, and then further make determinations in regard to such imagery by recognizing patterns would greatly assist the effort to produce predictive reports.

Quantum computer technology that is capable of machine learning and operates on an advanced neuronal design would essentially be an artificially intelligent analyst; however, the case studies indicate that even a fusion of the four-qubit quantum computer invented by Li’s team and SAIL’s neuronal technology would only be capable of analyzing certain forms of information found in Big Data.

The information that exists in Big Data is incredibly diverse, and analyzing imagery is only one specific skill-set that can be utilized in the effort to produce predictive reports based on intelligence derived from Big Data. Quantum and neuronal computer designs may someday be capable of performing at or above the performance levels routinely found among human analysts; but human analysts would continue to be a necessary component of the predictive analytics field because the human intellect would still possess broader capabilities than future models of quantum computer technology.

The research question can be answered based on the findings reached from the operationalization of the variables. Quantum computer technology could someday allow computers to perform machine learning and problem-solving, and such abilities would allow quantum computer technology to independently analyze Big Data in order to produce predictive analytics reports; nevertheless, such computer advances would need to be combined with the skills of human analysts in order to produce predictive reports that comprehensively include analysis derived from a broad range of information.

Quantum computer technology will most likely not replace human beings in the field of predictive analytics, but the further development of quantum computers will allow cyber-technology to evolve from mere tools used by human analysts into artificially intelligent analysts that work alongside human beings in the intelligence field of predictive analytics.

Conclusions:

The research results suggest that quantum computers will allow Moore’s Law to continue to hold, with computer technology advancing incrementally at a steady rate. Future models of quantum computers could act in an unsupervised or semi-autonomous manner because of AI-level functionality, such as machine learning and problem-solving skills.

Certain aspects of the field of predictive analytics will possibly be handled almost exclusively by quantum computer technology because of comparative advantages that such future artificial intelligence would most likely be capable of exercising. Specialized quantum computer technology would most likely be used to produce sophisticated and high-quality analysis, but human analysts would be irreplaceable during the subsequent steps of the intelligence process.

Human analysts would be needed to further analyze quantum computer outputs that concern answers found in Big Data; analytical reports that human analysts produce would most likely be combined with the reports that were produced by the specialized quantum computer technology, and comprehensive predictive reports would then be made available to the intelligence consumers.

Human analytic abilities being someday combined with future specialized quantum computer technology would most likely be a profoundly beneficial step forward in the evolution of the intelligence field known as predictive analytics; ultimately, the research finds that quantum computer technology could be combined with other computer science breakthroughs, such as neuronal computational networking, and such cyber-technology would most likely allow for artificially intelligent computers to solve problems and learn while aiding humans in the overall predictive analytics process.

Some experts in the field of artificial intelligence have put forward claims that the human brain is far too complex and sophisticated to be matched in cognitive functions by computers; such sceptics of the potential of artificial intelligence have argued that computer technology can essentially play the role of little to nothing more than a tool that human analysts can utilize in the field of predictive analytics.

A school of thought among experts working in fields related to artificial intelligence argues that computer technology can be, or recently has been, invented that matches the analytical skills of human beings; such experts believe that artificial intelligence is possible and that such machines can be given the responsibility of offering answers based on computer-powered analysis.

The debate over the potential of artificial intelligence continues, but some scientists have argued that the rise of quantum computer technology may offer the metaphorical key to developing artificial intelligence that could match or exceed the performance of human analysts; such capabilities may someday exist because of the problem solving and machine learning skills that quantum computing technology may eventually enable computers to exercise. The research set out to answer the following research question: in what ways can artificial intelligence based predictive analytics be advanced by quantum computing?

In the spirit of finding answers to the research question, a methodology was devised which entailed testing the following hypothesis: despite computer-based predictive analytics technology currently being only capable of producing reports that are of a quality which lack the analytical problem-solving capacities of human analysts, the development and subsequent applications of quantum computing technology will bring computer-driven predictive analytics to at least the same level of analytical problem-solving capabilities as is routinely offered by human analysts.

Case studies were selected in order to operationalize independent and dependent variables. Control-group case studies examined advanced classical computer technology. The research’s test groups examined the computer problem-solving skills, probability outputs, and machine learning capabilities of quantum computer technology that can be used in the field of predictive analytics.

Qualitative and quantitative measurements were considered in order to fulfil the operationalization of the research’s variables. The independent variable selected in the research was quantum computer technology that is being applied or is believed possible to be applied to AI designs that could be used in the field of predictive analytics.

The dependent variables selected in the research were computer-based problem-solving capabilities, probability outputs, and machine learning capabilities that can be applied to the field of predictive analytics. The criteria for understanding the research’s findings after the operationalization of the variables via analysis of the selected case studies were set forth in the following way: if the independent variable significantly impacts the dependent variables examined in the research, then the hypothesis will be found as supported.

However, if the impact of the independent variable upon the dependent variables is found to be insignificant/negligible, then the hypothesis used in the proposed research will be found to be unsupported. The results that were found after the operationalization of the research’s variables show that the hypothesis was moderately supported.

The research offers several suggestions for important future research related to the subject matter this research has examined. Future research should be carried out regarding how predictive products such as Watson Analytics can be improved by the incorporation of quantum computer technology; quantum computer technology is currently available in different design models, and uncertainties regarding the D-Wave’s exact benefits to the field should be addressed in future research.

Future research regarding the benefits of incorporating the D-Wave design into the field of predictive analytics could examine the usefulness of integrating technology specifically intended to solve optimization problems with the established cyber-technology currently used in the field.

Greater efforts towards learning how to fully and effectively integrate a quantum computer such as the D-Wave would be useful; indeed, such research would not only be insightful from the point of view of scholars and intelligence professionals, it would also offer the D-Wave’s developers a clearer view of the marketability of the D-Wave as a product in the field of predictive analytics.

Future research may also be directed towards the unanswered question of how exactly any future models based on Zhaokai Li’s four-qubit quantum computer design could be integrated to work alongside classical predictive analytics technology such as Watson Analytics. The quantum computer designed by Zhaokai Li’s team may offer significant benefits to the field of predictive analytics.

People familiar with the characteristics of social media communication would almost certainly be aware that images and words are oftentimes placed together in order to convey meanings that trend over the Internet, and a computer capable of image-related machine learning would offer possible benefits if incorporated into predictive products such as Watson Analytics.

Future research that could answer the questions that relate to how best to apply quantum computer models would be very helpful to many parties that practice or seek to improve the performance capabilities of predictive analytics cyber-technology.

Future research may also wish to focus on understanding how newly emerging classical computer technology such as the program known as Eugene could be designed to work alongside quantum technology in the form of integrated computer networks that combine some quantum computer technology with newly emerging classical computers with advanced abilities to understand human conversations.

Products currently being marketed for use in predictive analytics may be improved by the application of programs designed to understand human conversation; many conversations take place over the Internet, such as the communications that happen every day via social media, and such data is often available to be analyzed inside a sea of Big Data. Therefore, products such as Watson may be improved if programs such as Eugene are integrated into them.

Quantum computer technology that may be used in the predictive analytics field would need to be integrated alongside such technology, and future research should examine how such integration could best be achieved.

Research conducted as a follow-up to this research may want to focus on neuronally inspired computer designs as a possible next step in the evolution of computer technology, and on how such neuronal designs could someday work as computer clusters that include quantum computing technology.

Future research could also examine whether neuronally inspired classical computer technology could be invented to work inside a quantum computer as a single model. The question of how the two emerging cyber-technologies could be combined in order to bring about a synergy of the efficiency of artificial neurons and quantum speed-up would be a very worthy course of future research.

Ultimately, incredible discoveries may result from future research that endeavours to further understand the true workings of the human brain, and any quantum manipulations that the brain may utilize in order to bring about cognitive functions.

Artificially intelligent quantum computers that someday exercise the problem-solving and machine learning skills needed to produce predictive analytics reports in a manner unsupervised by human beings may only be invented if researchers first learn how the human brain’s functions can be applied to computer technology.
