DISCUSSION
Our results constitute the first demonstration of unidirectional imaging. This framework uses structured materials formed by phase-only diffractive layers optimized through deep learning and does not rely on nonreciprocal components, nonlinear materials, or an external magnetic field bias. Because isotropic diffractive materials are used, the operation of our unidirectional imager is insensitive to the polarization of the input light and preserves the input polarization state at the output. As we reported earlier in Results (Fig. 5), the presented diffractive unidirectional imagers maintain their unidirectional imaging functionality under broadband illumination, over a large spectral band covering, e.g., 0.85 × λ to 1.15 × λ, even though they were trained using only monochromatic illumination at λ. This broadband imaging performance was further enhanced, covering even larger input bandwidths, by training the diffractive layers of the unidirectional imager with a set of illumination wavelengths randomly sampled from the desired spectral band of operation, as illustrated in fig. S7.
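A minimal sketch of this wavelength-sampling step, assuming a uniform draw over the 0.85 × λ to 1.15 × λ band at each training iteration (the function name is illustrative, not from the authors' code):

```python
import random

lam_design = 0.75e-3                           # design wavelength, lambda = 0.75 mm
band = (0.85 * lam_design, 1.15 * lam_design)  # assumed training band

def sample_training_wavelength():
    """Draw one illumination wavelength per training iteration."""
    return random.uniform(*band)
```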
By examining the diffractive unidirectional imager design and the analyses shown in Fig. 2 and fig. S1, one can gain more insight into its operation principles from the perspective of the spatial distribution of the propagating optical fields within the diffractive imager volume. The diffractive layers L1 to L3 shown in Fig. 2C exhibit densely packed phase islands, similar to microlens arrays, that communicate between successive layers. Conversely, the diffractive layers L4 and L5 have rapidly varying phase patterns, resulting in high spatial frequency modulation and scattering of light. Consequently, propagating light through these diffractive layers in different sequences modulates it in an asymmetric manner (A → B versus B → A). To gain more insight into this, we calculated the spatial distributions of the optical fields within the diffractive imager volume in fig. S1 (C and D) for a sample object. We observe that, in the forward direction (A → B), the diffractive layers, arranged in the order L1 to L5, kept the optical fields propagating forward through the focusing action of the microlens-like phase islands in layers L1 to L3; as a result, the majority of the input power was retained within the diffractive volume, creating a power-efficient image of the input object at the output FOV. However, for the backward operation (B → A), where the diffractive layers are arranged in the reversed order (L5 to L1), the optical fields in the diffractive volume are first modulated by the high spatial frequency phase patterns of layers L5 and L4; during the early stages of propagation within the diffractive volume, this channels a large amount of radiation into the space surrounding the diffractive volume in the form of unbound modes (see the green shaded areas in fig. S1, A and B). The remaining spatial modes that stayed within the diffractive volume (propagating from B to A) were guided by the subsequent diffractive layers (L3 to L1) to remain outside the output FOV (ending up within the orange shaded areas in fig. S1B).
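To make this asymmetry argument concrete, the following is a minimal, illustrative sketch (not the authors' code) of cascading the same phase-only layers in opposite orders using angular spectrum propagation; the grid size, feature pitch, layer spacing, and random phase patterns are placeholder assumptions:

```python
import math
import torch

N, dx, wl, dz = 200, 0.4e-3, 0.75e-3, 20e-3  # grid, pitch, wavelength, spacing (assumed)

# Angular spectrum transfer function for free-space propagation over dz
fx = torch.fft.fftfreq(N, d=dx)
FX, FY = torch.meshgrid(fx, fx, indexing="ij")
arg = 1.0 / wl**2 - FX**2 - FY**2
H = torch.exp(2j * math.pi * dz * torch.sqrt(torch.clamp(arg, min=0.0)))
H[arg < 0] = 0.0  # discard evanescent components

def propagate(u):
    return torch.fft.ifft2(torch.fft.fft2(u) * H)

# Five placeholder phase-only layers (trained patterns in the actual design)
layers = [torch.exp(2j * math.pi * torch.rand(N, N)) for _ in range(5)]

def diffractive_imager(u, phase_layers):
    for t in phase_layers:
        u = propagate(u) * t  # diffract to the next layer, then phase-modulate
    return propagate(u)       # final propagation to the output plane

u_in = torch.zeros(N, N, dtype=torch.cfloat)
u_in[90:110, 90:110] = 1.0                        # a simple square test object
out_fwd = diffractive_imager(u_in, layers)        # A -> B: L1 ... L5
out_bwd = diffractive_imager(u_in, layers[::-1])  # B -> A: L5 ... L1
# |out_fwd|**2 and |out_bwd|**2 generally differ: free-space diffraction and
# phase modulation do not commute, so the layer order sets a direction-dependent response.
```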
One should note that the intensity distributions formed by the modes that lie outside the output FOV could potentially be measured using, for example, side cameras that capture some of these scrambled modes. Such side cameras, however, cannot directly yield meaningful, interpretable images of the input objects, as also illustrated in fig. S1. With precise knowledge of the diffractive layers, including their phase profiles and positions, one could potentially train a digital reconstruction neural network that uses such side-scattered fields to recover the images of the input objects in the reverse direction of the unidirectional imaging system. Such an “attack,” which attempts to digitally recover the lost image of the input object through side cameras and learning-based image reconstruction methods, would not only require precise knowledge of the fabricated diffractive imager but could also be mitigated by surrounding the diffractive layers and the regions that lie outside the image FOV (orange regions in fig. S1, A and B) with absorbing layers/coatings that protect the unidirectional imager against such “hackers,” blocking the measurement of the scattered fields everywhere except the output image aperture. Such absorbing layers also break the time-reversal symmetry of the imaging system, which helps mitigate the risk of deciphering and decoding the original input in the backward direction.
Throughout this manuscript, we presented diffractive unidirectional imagers with input and output FOVs of 28 by 28 pixels, and these designs were based on transmissive diffractive layers, each containing ≤200 by 200 trainable phase-only features. To further enhance the unidirectional imaging performance of these diffractive designs, one strategy would be to create deeper architectures with more diffractive layers, also increasing the total number (N) of trainable features. In general, deeper diffractive architectures present advantages in terms of their learning speed, output power efficiency, transformation accuracy, and spectral multiplexing capability (39, 44, 47, 48). If an increase in the space-bandwidth product (SBP) of the input FOV A (SBPA) and the output FOV B (SBPB) of the unidirectional imager is desired, for example, due to a larger input FOV and/or a higher resolution requirement, then N must increase proportionally to SBPA × SBPB, demanding larger degrees of freedom in the diffractive unidirectional imager to maintain the asymmetric optical mode processing over a larger number of input and output pixels. Similarly, the inclusion of additional diffractive layers and features to be jointly optimized would also be beneficial for processing more complex input spectra through diffractive unidirectional imagers. In addition to the wavelength-multiplexed unidirectional imager reported in Figs. 8 to 10, an enhanced spectral processing capability through a deeper diffractive architecture may permit unidirectional imaging with, e.g., a continuum of wavelengths or a set of discrete wavelengths across a desired spectral band. Furthermore, by properly adjusting the diffractive layers and the learnable phase features on each layer, our designs can be adapted to input and output FOVs that have different numbers and/or sizes of pixels, enabling the design of unidirectional imagers with a desired magnification or demagnification factor.
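As an illustrative back-of-the-envelope check of the N ∝ SBPA × SBPB scaling (numbers assumed for illustration only):

```python
# Doubling the input FOV side length at a fixed pixel size quadruples SBP_A,
# so N must grow ~4x for this change alone (both FOVs doubled would give ~16x).
sbp_a = sbp_b = 28 * 28      # current input/output space-bandwidth products
sbp_a_new = (2 * 28) ** 2    # input FOV side length doubled, same pixel size
scale = (sbp_a_new * sbp_b) / (sbp_a * sbp_b)
print(scale)                 # -> 4.0
```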
Although the presented diffractive unidirectional imagers are based on spatially coherent illumination, they can also be extended to spatially incoherent input fields by following the same design principles and deep learning–based optimization methods presented in this work. Spatially incoherent input radiation can be processed using phase-only diffractive layers optimized through the same loss functions that we used to design the unidirectional imagers reported in our Results. For example, the wavefront of an incoherent field can be decomposed, point by point, into spherical secondary waves, each of which coherently propagates through the phase-only diffractive layers; the output intensity pattern is then the superposition of the individual intensity patterns generated by all the secondary waves originating from the input plane, forming the incoherent output image. However, simulating the propagation of such an incoherent field through the diffractive layers requires a considerably larger number of wave propagation steps than a spatially coherent input field, and as a result, the training of spatially incoherent diffractive imagers would take longer.
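A minimal sketch of this point-by-point simulation, assuming a coherent forward model `coherent_imager` (for example, the `diffractive_imager` sketched earlier) that maps an input complex field to an output field of the same shape:

```python
import torch

def incoherent_output(intensity_in, coherent_imager):
    """Sum output intensities of independently propagated secondary sources."""
    out = torch.zeros_like(intensity_in)
    for idx in intensity_in.nonzero():
        src = torch.zeros(*intensity_in.shape, dtype=torch.cfloat)
        src[tuple(idx)] = intensity_in[tuple(idx)].sqrt()  # one secondary point source
        out += coherent_imager(src).abs() ** 2             # intensities, not fields, add
    return out
```

Because one coherent propagation is needed per nonzero input point, the cost grows with the number of input pixels, which is precisely the increased number of wave propagation steps noted above.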
where L(·) refers to the loss function defined in Eq. 9. During the training of this model, the weight coefficients αImgBlk, αEffBst, and αEffSqz were empirically set to 0.0001, 0.001, and 0.001, respectively.
For quantifying the imaging performance of the presented diffractive imager designs, the reported values of the output MSE, output PCC, and output diffraction efficiency were directly taken from the calculated results of LImgMSE, PCC, and η, respectively, averaged across the blind testing image dataset. When calculating the power distributions of different optical modes within the diffractive volume, the power percentage of the output FOV modes takes the same value as η, and the power percentage outside the output FOV is computed by subtracting the total power integrated within the output image FOV from the total power integrated across the entire output plane (normalized by the total input power). The power in the absorbed modes is calculated by summing the power loss before and after the optical field modulation at each diffractive layer. After excluding the power of the above modes from the total input power, the remainder is assigned to the unbound modes.
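A minimal sketch of this bookkeeping, assuming access to the simulated output field, a boolean mask of the output image FOV, the total absorbed power summed over the layers, and the total input power (all names are illustrative, not from the authors' code):

```python
import torch

def mode_power_percentages(out_field, fov_mask, absorbed_power, input_power):
    out_intensity = out_field.abs() ** 2
    p_fov = out_intensity[fov_mask].sum()    # output FOV modes; p_fov/input_power = eta
    p_outside = out_intensity.sum() - p_fov  # modes outside the output FOV
    p_unbound = input_power - p_fov - p_outside - absorbed_power  # the remainder
    return {name: float(100 * p / input_power)
            for name, p in [("output FOV", p_fov), ("outside FOV", p_outside),
                            ("absorbed", absorbed_power), ("unbound", p_unbound)]}
```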
Training details of the diffractive unidirectional imagers
For the numerical models used here, the smallest sampling period for simulating the complex optical fields is set to be identical to the lateral size of the diffractive features, i.e., ~0.53λ for λ = 0.75 mm. The input/output FOVs of these models (i.e., FOV A and B) share the same size of 44.8 by 44.8 mm² (i.e., ~59.7λ × 59.7λ) and are discretized into 28 by 28 pixels, where an individual pixel corresponds to a size of 1.6 mm (i.e., ~2.13λ), indicating a four-by-four binning performed on the simulated optical fields.
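A minimal sketch of this four-by-four binning step, assuming the detected intensity (sampled at the feature pitch, 112 samples per side) is pooled onto the 28 by 28 output pixel grid; average pooling is shown, and a sum pool would differ only by a constant factor:

```python
import torch
import torch.nn.functional as F

sim_intensity = torch.rand(1, 1, 112, 112)           # simulated |field|**2, 28 * 4 per side
pixels = F.avg_pool2d(sim_intensity, kernel_size=4)  # -> (1, 1, 28, 28) output pixels
```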
For the diffractive model used for the experimental validation of unidirectional imaging, the sampling period of the optical fields and the lateral size of the diffractive features are chosen as 0.24 and 0.48 mm, respectively (i.e., 0.32λ and 0.64λ). This also results in a two-by-two binning in the sampling space where an individual feature on the diffractive layers corresponds to four sampling space pixels that share the same dielectric material thickness value. The input and output FOVs of this model (i.e., FOV A and B) share the same size of 36 by 36 mm² (i.e., 48λ × 48λ) and are sampled into arrays of 15 by 15 pixels, where an individual pixel has a size of 2.4 mm (i.e., 3.2λ), indicating that a 10-by-10 binning is performed at the input/output fields in the numerical simulation.
During the training process of our diffractive models, an image augmentation strategy was also adopted to enhance their generalization capabilities. We implemented random translation of the input images using the transforms.RandomAffine function from PyTorch's torchvision package, together with random up-to-down and left-to-right flipping. The translation amount was uniformly sampled within a range of [−10, 10] and [−5, 5] pixels for the diffractive unidirectional imager models used for numerical analysis and the model used for the experimental validation, respectively. The flipping operations were each performed with a probability of 0.5.
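A minimal sketch of this augmentation pipeline for the numerical-analysis models, assuming 28 by 28 input images; pixel shifts are passed to RandomAffine as fractions of the image size, and the flips are implemented here with torchvision's dedicated flip transforms (RandomAffine itself does not perform flipping):

```python
import torch
from torchvision import transforms

IMG_SIZE = 28   # input FOV resolution of the numerical models
MAX_SHIFT = 10  # +/- 10-pixel random translation

augment = transforms.Compose([
    transforms.RandomAffine(degrees=0,
                            translate=(MAX_SHIFT / IMG_SIZE, MAX_SHIFT / IMG_SIZE)),
    transforms.RandomVerticalFlip(p=0.5),    # random up-to-down flip
    transforms.RandomHorizontalFlip(p=0.5),  # random left-to-right flip
])

x = torch.rand(1, IMG_SIZE, IMG_SIZE)  # a dummy input amplitude image
x_aug = augment(x)
```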
All the diffractive imager models used in this work were trained using PyTorch (v1.11.0, Meta Platforms Inc.). We selected the AdamW optimizer (51, 52), with its parameters kept at their default values and identical across models. The batch size was set to 32. The learning rate, starting from an initial value of 0.03, was set to decay by a factor of 0.5 every 10 epochs. The training of the diffractive models was performed for 50 epochs. For the training of our diffractive models, we used a workstation with a GeForce GTX 1080Ti graphics processing unit (Nvidia Inc.), a Core i7-8700 central processing unit (Intel Inc.), and 64 GB of RAM, running the Windows 10 operating system (Microsoft Inc.). The typical time required for training a diffractive unidirectional imager is ~3 hours.
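A minimal sketch of this optimization setup (the placeholder `phase_params` stands in for the trainable phase features of the diffractive layers):

```python
import torch

phase_params = [torch.nn.Parameter(torch.zeros(200, 200)) for _ in range(5)]

optimizer = torch.optim.AdamW(phase_params, lr=0.03)  # other settings left at defaults
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(50):
    # ... loop over training batches (batch size 32): compute the loss,
    # loss.backward(), optimizer.step(), optimizer.zero_grad() ...
    scheduler.step()  # halve the learning rate every 10 epochs
```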
Vaccination of the diffractive unidirectional imager against experimental misalignments
During the training of the diffractive unidirectional imager design for experimental validation, possible inaccuracies imposed by the fabrication and/or mechanical assembly processes were taken into account in our numerical model by treating them as random 3D displacements (D) applied to the diffractive layers (53). D can be written as