Are We Any Closer To Quantum Computing?

[Illustration: a quantum computer imagined]

In 1981, American physicist and Nobel Laureate Richard Feynman gave a lecture at the Massachusetts Institute of Technology (MIT) in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations. Quantum computing was born. The illustration here shows what one might have imagined it to be back in 1981 - a kind of science-fiction computer.

Quantum computing is a revolutionary area of computing that uses the principles of quantum mechanics to process information in fundamentally different ways than classical computers. In classical computing, information is processed using bits, which are binary and can represent either a 0 or a 1. In quantum computing, however, the fundamental units of information are called qubits. Qubits can exist in a state of 0, 1, or both simultaneously, thanks to a quantum property called superposition. This allows quantum computers to perform multiple calculations at once.
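
To make superposition a little more concrete, here is a minimal sketch (in Python, using only numpy) that represents a single qubit as two amplitudes and computes the odds of measuring a 0 or a 1. It is just toy math for illustration, not a real quantum computer.

```python
import numpy as np

# A qubit is described by two complex amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# |a|^2 is the probability of measuring 0, and |b|^2 the probability of measuring 1.

ket_zero = np.array([1, 0], dtype=complex)          # the classical-like state |0>
ket_one = np.array([0, 1], dtype=complex)           # the classical-like state |1>
superposition = (ket_zero + ket_one) / np.sqrt(2)   # an equal mix of 0 and 1

def measurement_probabilities(state):
    """Return the probabilities of observing 0 and 1 for a single-qubit state."""
    probs = np.abs(state) ** 2
    return float(probs[0]), float(probs[1])

print(measurement_probabilities(superposition))  # roughly (0.5, 0.5): half 0, half 1
```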

I am not a physicist or computer engineer, so I don't want to go too deeply into that realm. Reading about this, I see the word "entanglement" and have some memory of Einstein referring to quantum entanglement as "spooky action at a distance." He was skeptical because it seemed to defy the principles of classical physics and his theory of relativity, but modern experiments have confirmed that entanglement is real and a fundamental aspect of quantum mechanics. In quantum computing, entanglement creates strong correlations between qubits, even when they are far apart.

Entanglement enables quantum computers to solve certain types of complex problems much faster than classical computers by leveraging these interconnected qubits. Quantum computers are particularly well-suited to tasks involving massive datasets, optimization problems, simulations, and cryptography. However, they are still in their early stages of development and face challenges such as error rates, stability, and scalability.

In the same way that AI is already in your daily life - even if you don't notice or acknowledge it - quantum computing could be used in everyday activities. It could revolutionize drug discovery and personalized medicine by simulating molecular interactions at an unprecedented speed, leading to faster development of cures and treatments. By solving complex optimization and learning problems, quantum computers could significantly enhance AI's capabilities, leading to smarter assistants and systems.

In cryptography and cybersecurity, current encryption methods could be broken by quantum computers, but quantum technology could also enable quantum-safe encryption, making online transactions and communications more secure. There's good and bad in almost every discovery.

In logistics, from smarter traffic systems to more efficient delivery routes, quantum computing could optimize operations, reducing fuel consumption, travel times, and costs.

And quantum computing could also improve energy solutions, financial modeling, material design, and many things we haven't even considered yet.

Of course, there are challenges. Qubits are highly sensitive to their environment. Even minor disturbances like temperature fluctuations, vibrations, or electromagnetic interference can cause qubits to lose their quantum state—a phenomenon called decoherence. Maintaining stability long enough to perform calculations is a key challenge. Many quantum computers require extremely low temperatures (close to absolute zero) to operate, as qubits need highly controlled environments. Building and maintaining these cryogenic systems is both expensive and challenging.

Small-scale quantum computers exist, but scaling up to thousands or millions of qubits is a monumental task and requires massive infrastructure, advanced error correction mechanisms, and custom hardware, making them cost-prohibitive for widespread adoption.

On the education side of this, quantum computing sits at the intersection of physics, engineering, computer science, and more. A lack of cross-disciplinary expertise will slow down progress in this field.

Linking to the Wayback Machine

Google Search has integrated a feature that links directly to the Wayback Machine, allowing users to access archived versions of webpages through search results.

The Wayback Machine is an online archive created by the Internet Archive, a non-profit organization. It allows users to access and view historical snapshots of web pages, dating back to the late 1990s. Essentially, it's like a digital time machine that lets you see how websites looked in the past. This can be useful for research, preserving digital history, or just satisfying curiosity.

By clicking the three dots next to a search result and selecting "More About This Page," users can view how a webpage appeared at different points in time. The collaboration enhances public access to web history, ensuring that digital records remain available for future generations.
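
For readers who want to skip Google entirely, the Internet Archive also offers a simple public availability API for looking up archived snapshots. Here is a short Python sketch of that lookup; the URL being checked is just a placeholder.

```python
import requests

# Ask the Internet Archive's availability API for the closest archived
# snapshot of a page. "example.com" is only a placeholder URL.
response = requests.get(
    "https://archive.org/wayback/available",
    params={"url": "example.com"},
    timeout=10,
)
snapshot = response.json().get("archived_snapshots", {}).get("closest")

if snapshot:
    print("Archived copy:", snapshot["url"], "from", snapshot["timestamp"])
else:
    print("No archived snapshot found.")
```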


Source: https://blog.archive.org/2024/09/11/new-feature-alert-access-archived-webpages-directly-through-google-search/

Social Media Attribution

[Image: social media screen]

When I first started consulting on social media in 2005, I was introducing blogs, wikis, podcasts, and the newly emerging social networks such as Facebook. Both with my academic colleagues and with clients, one of the persistent questions was "How do I know I'm getting any benefit from these social tools?"

Seeing the impact of your social marketing relies on attribution, which is similar to the older metric of ROI (return on investment). Both are sometimes difficult to quantify.

As someone who taught writing for many years, when I first heard the term attribution I thought of giving credit to the original source of information, ideas, images, or language used in a piece of writing. Attribution in writing is important because it shows respect for the work of others, helps to prevent plagiarism, and points readers to sources that often provide additional information. (See my attribution at the end of this post.)

ROI (return on investment) is a much older dollars-and-cents measurement used well before the Internet and social media. For example, you invested $1000 in an advertisement and it produced $5000 in sales. (Some might call that ROAS - Return on Ad Spend - but I'm keeping it simpler here.) Or perhaps you spent $1000 on an ad and saw no increase in sales.
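
As a quick back-of-the-envelope illustration of that arithmetic, here is a tiny Python sketch using the made-up numbers above (they are examples, not real campaign data).

```python
def roi(revenue, cost):
    """Return on investment as a ratio: (revenue - cost) / cost."""
    return (revenue - cost) / cost

# The $1000 ad that produced $5000 in sales.
print(f"ROI: {roi(5000, 1000):.0%}")   # 400%

# The $1000 ad that produced no sales at all.
print(f"ROI: {roi(0, 1000):.0%}")      # -100%
```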

Attribution in the social media sense assigns value to the channels that drive an outcome. That might mean dollars, but it could also be a purchase, a web visit, a download, or a subscription to the site or a newsletter.

It is a bit of reverse engineering or backward design in that you are looking at the effect and trying to determine the cause.

My own tracking of the referring sites for posts on this site allows me to see if traffic to a post came from LinkedIn, Facebook, Twitter, one of my blogs or just a search engine. When someone finds me via Google, I can see what search terms they used. Those results can be surprising. I might get a surge of traffic from a search that found the mention of "Erik Satie" or "flat web design" or "social media attribution."
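
There is no magic in that kind of tracking; most of it comes down to reading the referrer on each visit and putting it in a bucket. Here is a rough Python sketch of the idea - the domain lists are invented for illustration, not the actual ones any analytics tool uses.

```python
from urllib.parse import urlparse

# Hypothetical channel buckets; a real report would use whatever you track.
SOCIAL_DOMAINS = {"facebook.com", "twitter.com", "linkedin.com", "t.co"}
SEARCH_DOMAINS = {"google.com", "bing.com", "duckduckgo.com"}

def classify_referrer(referrer_url):
    """Label a visit by where the referring link came from."""
    if not referrer_url:
        return "direct"
    host = urlparse(referrer_url).netloc.lower().removeprefix("www.")
    if host in SOCIAL_DOMAINS:
        return "social"
    if host in SEARCH_DOMAINS:
        return "search"
    return "other"

print(classify_referrer("https://www.linkedin.com/feed/"))                 # social
print(classify_referrer("https://www.google.com/search?q=erik+satie"))     # search
```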

I have little control over search engine attributions, but I can control what I post on social media and how I word the posts.

[Diagram: touchpoints]

Attribution is generally broken down into three modes:
Last-touch,
First-touch, and
Multi-touch attribution.
(Take a look at this diagram from digitalthought.me for more on multi-touch models such as Even, Time Decay, Weighted, Algorithmic, etc.)

First-touch attribution credits the first marketing touchpoint. For example, you run an ad and count how many contacts came from that ad.

Last-touch attribution credits the channel that a lead went through just before converting. Maybe you ran an ad on Facebook which someone later tweeted, and the lead came from the tweet that linked to your site for a purchase, so Twitter gets the attribution.

Last-touch is easier to measure, but both single-touch models fail to show the complete and sometimes circuitous customer journey. That's why multi-touch attribution is used. It gets much more complicated and more difficult to track, and a full treatment is beyond the scope of this post. But as an example, time decay attribution gives more weight to touchpoints closer to the final conversion event. If your original ad is the starting point, but the final purchase came after a tweet that was retweeted and then posted as a link in someone's blog a week later, the blog gets more credit (as a personal endorsement) than the ad, although obviously none of this would have happened without the ad.
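
To show what a time decay model actually does with the numbers, here is a small Python sketch that weights each touchpoint by how close it was to the conversion. The half-life and the sample journey are invented purely for illustration.

```python
def time_decay_weights(days_before_conversion, half_life_days=7):
    """Weight each touchpoint by 0.5 ** (days / half-life), then normalize
    so the credit across all touchpoints sums to 100%."""
    raw = [0.5 ** (days / half_life_days) for days in days_before_conversion]
    total = sum(raw)
    return [w / total for w in raw]

# Hypothetical journey: ad seen 14 days out, tweet 7 days out, blog link 1 day out.
touchpoints = ["original ad", "tweet", "blog post"]
weights = time_decay_weights([14, 7, 1])

for channel, weight in zip(touchpoints, weights):
    print(f"{channel}: {weight:.0%} of the credit")
```

Run as written, the blog post (the most recent touch) gets a little over half the credit while the original ad gets the least, which matches the intuition described above.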

Back to that question I started getting in 2005. It is important to remind clients that social media used for marketing and as engagement and brand-building may not always generate leads or sales directly but rather indirectly. Getting visitors to your site alone is a kind of success. It may not lead to sales (ROI) immediately, but it increases awareness of your brand for the future.

I will crosspost this on my business blog, Ronkowitz LLC, and measure which post gets the best results.

Attribution is more complicated than this primer covers, so you might want to check out these sources:

Applying Technology Laws

Huang's Law and Moore's Law are technology "laws." Maybe it is more accurate to say they are observations, but "law" has become attached to them since they appear to remain true.

Moore's law is the observation that the number of transistors in an integrated circuit (IC) doubles about every two years. Moore's law is an observation and projection of a historical trend. Rather than a law of physics, it is an empirical relationship linked to gains from experience in production.

Gordon Moore, the co-founder of Fairchild Semiconductor and Intel (and former CEO of the latter), posited the idea in 1965 and projected that this rate of growth would continue for at least another decade. In 1975, looking forward to the next decade, he revised the forecast to doubling every two years. His prediction has held since 1975 and has become known as a "law".

Moore's prediction has been used in the semiconductor industry to guide long-term planning and to set targets for research and development, thus functioning to some extent as a self-fulfilling prophecy.

Huang's Law has been called the new Moore's Law. It seems that the old rule of thumb that the same dollar buys twice the computing power every 18 months is no longer true.

Huang's law is an observation in computer science and engineering that advancements in graphics processing units (GPUs) are growing at a rate much faster than that of traditional central processing units (CPUs). The observation stands in contrast to Moore's law: Huang's law states that the performance of GPUs will more than double every two years.

Jensen Huang, Nvidia's CEO, observed at the 2018 GPU Technology Conference that Nvidia's GPUs were "25 times faster than five years ago," whereas Moore's law would have predicted only a ten-fold increase. As microchip components become smaller, it becomes harder for chip advancement to keep pace with Moore's law.
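
The arithmetic behind that comparison is simple exponential doubling. A quick sketch, assuming the common informal reading of Moore's law as doubling every 18 months:

```python
def growth_factor(years, doubling_period_years):
    """Expected performance multiple after `years` if it doubles every period."""
    return 2 ** (years / doubling_period_years)

# Doubling every 18 months over 5 years works out to roughly a ten-fold gain...
print(f"Moore-style expectation over 5 years: {growth_factor(5, 1.5):.1f}x")  # ~10.1x
# ...while the GPU gain Huang cited for the same 5 years was about 25x.
print("GPU gain cited by Huang: 25x")
```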

Huang's Law and Moore's Law are concepts primarily associated with the semiconductor industry and technology advancements. However, their principles can be extended and applied to various domains beyond technology.

You can extend Huang's Law to other fields where exponential growth or improvement is observed. For example, consider advancements in renewable energy efficiency, healthcare outcomes, or educational achievements. The idea is to identify areas where progress follows an exponential curve and apply the principles accordingly.

Both laws highlight the concept of scaling - either in computational power (Moore's Law) or AI efficiency (Huang's Law). You could apply this principle to other systems and processes where scaling can lead to significant improvements.

I am imagining a discussion (probably in a classroom setting) about ethical considerations, such as the impact of rapid advancements on society, and focus on responsible and ethical development in various fields. That certainly is true currently in discussions of AI.