Technology Landscape Demands a Leap in Fiber Optics Functionality
The information medium of the 20th century was revolutionized twice: first with the invention of the integrated circuit, which boosted data processing, and second with the demonstration of low-loss optical fiber, which brought data communication up to speed. These two components form the foundation of the Internet as we know it: a giant network of data-processing nodes interconnected by high-throughput communication links.
Similarly, in the 21st century, we again find ourselves on the verge of a transformative leap in data-processing and distribution hardware. Increasing demand for computational power has driven the emergence of a plethora of new hardware platforms. From spintronic and photonic to neuromorphic and quantum, these novel computational schemes operate on increasingly demanding power budgets, differ from each other in the protocols they use for data processing, and even encode data in physically disparate forms and degrees of freedom.
Yet fiber optics, the workhorse of digital communication, still awaits the transformation that would enable it to interconnect those diverse platforms into one harmonized network. The challenge is not only that the growing energy demands of computing across diverse data-processing platforms require even more efficient long-haul communication links than state-of-the-art fiber optics provides, but also that the fiber of tomorrow will need to translate data across those platforms. Imparting efficient data-transduction and transformation capabilities to fiber optics is likely to require integrating photonic and optoelectronic devices and systems into the fiber itself.
The FAMES Lab at Indiana University Bloomington develops a set of material-processing techniques that will help fiber optics undergo this desired transformation. Drawing inspiration from Very Large-Scale Integration (VLSI), which in the 1970s resulted in the emergence of the computer microprocessor as we know it, we dubbed our approach “VLSI for Fibers” (or “VLSI-Fi”). VLSI-Fi harnesses melt-processing of multimaterial fiber preforms to materialize, in fibers, arbitrarily complex architectures typical of integrated circuitry. Our vision is to realize the long-haul network interconnects of the Internet of Tomorrow, meeting its demands by integrating the emerging computation platforms.
How Do We Functionalize the Fiber?
At the FAMES Lab, we aim to develop a material-processing toolbox whose techniques, combined in the proper sequence, yield pre-engineered solid-state systems embedded in a fiber, elevating the discussion to the circuit-design level and hiding the underlying materials science completely “under the hood.”
Is the expectation of such an advance even realistic, one might ask?
Well, at least one example is right in front of our eyes: VLSI, used for building microprocessors, formulates a set of simple rules, such as the width of electrical leads and of the gaps between them, for the circuit to function properly, allowing one to think about microprocessors in terms of circuit-board design rather than semiconductor physics.
Yet fiber fabrication faces an additional significant challenge compared to microchip fabrication. While a silicon wafer can be structured all the way to the final product in the solid state by a sequence of processing steps combining photolithography with chemical and thermal treatment, standard fiber-optic manufacturing, such as the thermal draw, relies on pulling the fiber from a melt. Melt-shaping of the glass and the materials it encapsulates is prone to complex, hard-to-control fluid dynamics. To achieve the technological goal of VLSI-Fi, the user-prescribed in-fiber self-assembly of integrated devices and systems by design (demonstrated in Figure 1), this nonlinear and at times even chaotic fluidic behavior, governed by spontaneous processes such as capillary instability, must nevertheless yield ordered solid-state architectures rather than generally uncontrollable outcomes.
Let the Order Emerge out of Chaos!
Exploring the capillary breakup of multimaterial fibers fed through a localized liquefaction zone, such as a miniature hydrogen-oxygen flame or a powerful laser spot, we noticed that this process, which closely resembles a dripping faucet (a standard demonstrator of chaos), at times became deterministic and predictable. In such a predictable breakup, continuous, separate semiconducting cores of a silica fiber could be broken up into monodisperse arrays of spheres, the cores pinching off in phase with each other at the same axial location along the fiber. We anticipated that understanding the physics behind this unusual behavior would help us control the in-fiber architecture by design rather than by exploration, which is, in fact, the “holy grail” of VLSI-Fi.
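To build intuition for the length scales at play, consider the classical Plateau-Rayleigh picture: a liquid cylinder of radius R breaks up at a preferred wavelength, and volume conservation then sets the size of the resulting spheres. The sketch below is purely illustrative: it uses Rayleigh's inviscid fastest-growing-mode factor of about 9.02, whereas a viscous core in a viscous cladding, let alone one subject to the axial viscosity gradient discussed here, breaks up at different wavelengths, and the 1-micron core radius is a hypothetical input, not a value from our paper.

```python
def breakup_geometry(core_radius_m, wavelength_factor=9.02):
    """Classical Plateau-Rayleigh estimate for a breaking liquid cylinder.

    wavelength_factor ~ 9.02 is Rayleigh's fastest-growing mode for an
    inviscid jet (lambda ~ 9.02 * R); viscous threads in a viscous
    cladding generally break up at longer wavelengths.
    """
    wavelength = wavelength_factor * core_radius_m
    # Volume conservation: one wavelength of the cylinder collapses into
    # one sphere, so pi R^2 lambda = (4/3) pi r^3.
    sphere_radius = (3 * core_radius_m**2 * wavelength / 4) ** (1 / 3)
    return wavelength, sphere_radius

# Hypothetical 1-micron core radius:
lam, r = breakup_geometry(1e-6)
# lam ~ 9.0 um between pinch-off points; r ~ 1.9 um spheres
```

Note that the spheres come out larger than the original core radius; in a real multimaterial draw, the effective prefactor, and hence the sphere size and pitch, depends on the viscosity contrast and the thermal profile, which is part of what makes a predictive model so valuable.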
Though I had developed some intuition about this predictable type of breakup a few years back, and had even described its behavior fairly accurately mathematically, my understanding of the process was more of a gut feeling, built from watching the system's behavior in a large number of experiments, and the description I had in mind rested on merely geometrical arguments. It took some time to think through the physics and eventually formulate a description of the process in terms of basic conservation laws, such as those of energy and momentum, or in terms of a balance of forces. When such a model was finally in place, it was like breaking a hole in a wall that had been obscuring the world behind it: suddenly, results of experiments from years ago that I had no explanation for made sense and fell into place, forming a big, harmonious picture. It was an “Aha!!!” experience, the chase of which, without exaggeration, is the strongest motivator for me, as someone fascinated by the beauty of science, to stick to this lifelong journey.
In our paper, we present, for the first time, the physical intuition behind this predictable kind of breakup and formulate a model providing a set of material-processing design rules that yield the desired in-fiber architectures. With my team at the FAMES Lab and our collaborators from Applied Mathematics and Mechanical Engineering at MIT, we tested the model and the limits of its validity both experimentally and ab initio.
We call it the Axial Viscosity Gradient Instability Model (AVG-IM). Given the temperature profile of the liquefaction zone, the feed speed of the fiber through that zone, the fiber dimensions, and the material composition, AVG-IM defines the conditions under which the breakup will be predictable, allowing us to design the breakup period and location and, no less importantly, to suggest technological recipes for manufacturing integrated circuits in multimaterial fibers, using AVG-IM as a design framework.
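As a back-of-envelope companion to such a design framework (this is simple kinematic bookkeeping, not the AVG-IM stability criterion itself), a deterministic breakup with one pinch-off per period at a fixed axial location ties the pitch of the sphere train to the feed speed, and mass conservation then fixes the sphere size. All numeric inputs below are hypothetical.

```python
def sphere_train(core_radius_m, feed_speed_m_s, breakup_period_s):
    """Kinematics of a deterministic, periodic breakup.

    With one pinch-off per period at a fixed axial location, the
    center-to-center pitch of the sphere train is feed_speed * period,
    and the core volume fed during one period becomes one sphere.
    """
    pitch = feed_speed_m_s * breakup_period_s
    # Mass conservation: pi R^2 * pitch = (4/3) pi r^3
    sphere_radius = (3 * core_radius_m**2 * pitch / 4) ** (1 / 3)
    return pitch, sphere_radius

# Hypothetical draw: 2-um core, fed at 10 um/s, pinching off every 5 s
pitch, r = sphere_train(2e-6, 10e-6, 5.0)
# pitch = 50 um; r ~ 5.3 um
```

The nontrivial part, which the bookkeeping above takes as given, is predicting when the breakup is periodic at all and what its period is; that is precisely what AVG-IM supplies from the thermal profile, feed speed, geometry, and materials.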
Under the conditions where AVG-IM is valid, the breakup yields a structure that looks like a perfectionist's dream: you feed the fiber through the liquefaction zone, where the magic happens, and out comes a perfectly ordered train of discrete fiber-embedded architectural entities, as in Figure 2.
AVG-IM is valuable not only technologically. Looking more broadly, it enables an experimental decomposition of the continuous spectrum of all possible capillary instabilities in viscous co-flowing fluids into individual, predictable components. Thus, just as quantum simulators calculate the molecular-level dynamics of many-body systems by capturing the evolution of a Hamiltonian equivalent to that of the simulated process, AVG-IM potentially positions the multimaterial fiber as a generalized physical solver of the computationally heavy Navier-Stokes equations of fluid dynamics.