FDTD method simulates photonic crystals


It would be impossible to design and simulate microprocessors without powerful CAD programs to model microelectronic circuits and structures. Likewise, the photonics industry must find a means to integrate more than one optical device onto a single chip. The industry is now at the point where true integrated optical circuits, albeit with only a few components on a given substrate, are no longer relegated to theoretical speculation.

The beam propagation method (BPM), one of the most widely used computational techniques, produces fast and accurate modeling of photonic devices by taking advantage of certain simplifications applicable to many designs. But the approximations introduced in the BPM produce inaccurate results for structures that violate its underlying assumptions (see "BPM: The efficient modeling tool," p. 34).

Ring resonators and photonic bandgap structures are examples of such devices (see Fig. 1). Photonic bandgap crystals (sometimes referred to as photonic crystals) are promising means for achieving certain critically important devices such as short radius bends in waveguide interconnects, and one of the best ways to simulate them accurately is through the use of the finite-difference time-domain (FDTD) method (see "Photonic crystals guide the light," p. 36).

The FDTD method of simulating the propagation of electromagnetic fields is well established. It has seen wide use in the microwave industry and has been extensively validated over the last few decades. And while it is computationally demanding and numerically intensive, the ever-increasing computational power of personal computers has allowed it to become one of the most widely used modeling techniques for new photonic applications.

The FDTD method is a rigorous solution to Maxwell's equations, without any theoretical restrictions or approximations. The method solves Maxwell's curl equations as opposed to, for example, a solution of the monochromatic wave equation.

Consider a region of space where there are no currents flowing and no isolated charges. Maxwell's curl equations in Cartesian coordinates can be written simply as six scalar equations. Here are two examples:
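In standard notation, with ε the permittivity and μ the permeability, two of these scalar equations are:

```latex
\frac{\partial E_x}{\partial t} = \frac{1}{\varepsilon}\left(\frac{\partial H_z}{\partial y} - \frac{\partial H_y}{\partial z}\right)
\qquad
\frac{\partial H_x}{\partial t} = \frac{1}{\mu}\left(\frac{\partial E_y}{\partial z} - \frac{\partial E_z}{\partial y}\right)
```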

The other four are symmetric equivalents of these obtained by exchanging the x, y, and z subscripts and derivatives.

In all cases the temporal change in the E field is dependent upon the spatial variation of the H field, and vice versa. In the FDTD method Maxwell's equations are modified to the form of central-difference equations, made discrete and implemented numerically in software.

The most common method, based on Yee's mesh,1 computes the E and H field components at points on a grid spaced Δx, Δy, Δz apart. However, the E and the H field components are interlaced in all three spatial dimensions (see Fig. 2). Furthermore, breaking time up into discrete steps of Δt, the E fields are computed at times t = nΔt and the H fields at times t = (n+1/2)Δt, where n is an integer representing the compute step. For example, the E field at a time t = nΔt is equal to the E field at t = (n-1)Δt plus an additional term computed from the spatial variation (curl) of the H field at time t = (n-1/2)Δt.

This method results in six equations that can be used to compute the field at a given mesh point, denoted by integers i, j, k. For example, two of the six are:
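In one common indexing convention (field superscripts denote the time step), these two equations read:

```latex
H_x^{n+\frac{1}{2}}(i,j,k) = H_x^{n-\frac{1}{2}}(i,j,k)
+ \frac{\Delta t}{\mu}\left[\frac{E_y^{n}(i,j,k+1)-E_y^{n}(i,j,k)}{\Delta z}
- \frac{E_z^{n}(i,j+1,k)-E_z^{n}(i,j,k)}{\Delta y}\right]
```

```latex
E_x^{n+1}(i,j,k) = E_x^{n}(i,j,k)
+ \frac{\Delta t}{\varepsilon}\left[\frac{H_z^{n+\frac{1}{2}}(i,j,k)-H_z^{n+\frac{1}{2}}(i,j-1,k)}{\Delta y}
- \frac{H_y^{n+\frac{1}{2}}(i,j,k)-H_y^{n+\frac{1}{2}}(i,j,k-1)}{\Delta z}\right]
```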

These equations are iteratively solved in a leapfrog manner, alternating between computing the E and H fields at subsequent Δt/2 intervals.
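The leapfrog update can be sketched in one dimension. This is a minimal illustration, not the full three-dimensional Yee scheme; the grid sizes, source position, and pulse parameters below are illustrative choices, not values from the article.

```python
import numpy as np

# Minimal 1-D FDTD leapfrog sketch in normalized units (c = 1, free space,
# "magic" time step dt = dx, so a pulse travels exactly one cell per step).
nx, nt = 200, 150
ez = np.zeros(nx)        # E field, sampled at integer time steps
hy = np.zeros(nx - 1)    # H field, interlaced half a cell and half a step away

for n in range(nt):
    # update H from the spatial variation (curl) of E
    hy += ez[1:] - ez[:-1]
    # update E at interior points from the spatial variation of H
    # (the end points stay zero, acting as crude reflecting boundaries)
    ez[1:-1] += hy[1:] - hy[:-1]
    # additive ("soft") source: a Gaussian pulse driven at one grid point
    ez[10] += np.exp(-((n - 30) / 10.0) ** 2)
```

With the stable time step the field remains bounded; the injected pulse splits into left- and right-traveling halves, and the update uses only additions and subtractions, as noted below.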

The choice of the computational domain (the size of the grid), the grid spacing (Δx, Δy, Δz) and the time step in FDTD are critically important to produce accurate simulation of any photonic structure. Typically, the grid spacing must be considerably less than the wavelength-frequently less than λ/10-and of course less than the smallest detail in the layout. To get a stable algorithm, one must adhere to the following condition relating the spatial and temporal step size:
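In its usual form this stability (Courant) condition reads:

```latex
\Delta t \;\le\; \frac{1}{c\,\sqrt{\dfrac{1}{\Delta x^2}+\dfrac{1}{\Delta y^2}+\dfrac{1}{\Delta z^2}}}
```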

where c is the velocity of light.

The boundary conditions at the spatial edges of the computational domain must be carefully considered. Many simulations employ an absorbing boundary condition that eliminates any outward propagating energy that impinges on the domain boundaries. One of the most effective is the perfectly matched layer (PML),2 in which both electric and magnetic conductivities are introduced in such a way that the wave impedance remains constant, absorbing the energy without inducing reflections.
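As a small illustration of the grading idea behind the PML, the absorbing conductivity is often ramped up polynomially from zero at the interface, σ(x) = σ_max (x/d)^m, so the wave enters the layer without an abrupt impedance step. The values of σ_max and m below are arbitrary, chosen only for the demonstration.

```python
import numpy as np

def pml_sigma(ncells, sigma_max=5.0, m=3):
    """Polynomial-graded conductivity profile across a PML of ncells cells.

    sigma rises smoothly from ~0 at the PML interface to sigma_max at the
    outer edge, which keeps numerical reflection at the interface small.
    """
    x = np.arange(1, ncells + 1) / ncells   # normalized depth into the layer
    return sigma_max * x ** m

profile = pml_sigma(10)   # monotonically increasing, ends at sigma_max
```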

Periodic boundary conditions (PBC) are also important because of their applicability to PBG structures. There are a number of variations on the PBC, but they all share the same common thread: the boundary condition is chosen such that the simulation is equivalent to an infinite structure composed of the basic computational domain repeated endlessly in all dimensions. PBCs are most often employed when computing the band structure of photonic crystals.

Finally, a simulation begins with all E and H field values set to zero, and an appropriate driving function is introduced at some position within the domain (see Fig. 3).

The computational steps in FDTD are relatively simple, using only addition, subtraction, and multiplication, which aids simulation speed. The technique is extremely versatile because it is inherently full vectorial without limitations on optical effects such as direction of propagation, index contrast, or backward reflections.

The FDTD method is time-tested and stable, and can efficiently handle material dispersion and nonlinearity. And because it is a time-domain technique, it can cover a wide frequency range with a single simulation run. Furthermore, FDTD lends itself to cluster computing-sharing the computational demand for a single problem among several computers on a network-which has allowed researchers to simulate problems otherwise impossible on a single computer.

On the other hand, FDTD is computationally demanding, requiring a fairly dense grid of points at which all three vector components of both the E and H fields must be maintained. A full three-dimensional simulation of some fairly mundane problems like a simple, planar, evanescent splitter can be impossible. Such devices are usually long (20 to 30 mm) and would require a computer with terabytes of memory, to say nothing of an impossibly long simulation time. Fortunately, such devices are handled eminently well by the BPM.

Fabio Pizzuto is an application engineer at RSoft Design Group, 200 Executive Blvd., Ossining, NY 10562. He can be reached at fabio_pizzuto@rsoftdesign.com.

  1. K. S. Yee, IEEE Trans. Antennas Propag. AP-14, 302 (1966).
  2. J.-P. Berenger, J. Comput. Phys. 114, 185 (1994).
  3. A. Taflove, Computational Electrodynamics: The Finite-Difference Time-Domain Method (Artech House, Boston, 1995).
  4. R. L. Espinola et al., Opt. Express 8, 518 (2001), http://www.opticsexpress.org/oearchive/source/32765.htm.

BPM: The efficient modeling tool

The BPM in its most common form is an approximate numerical method for solving the scalar wave equation for arbitrary device geometries. Assuming a scalar field for Maxwell's equations and a monochromatic solution yields the well-known Helmholtz equation:
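With a refractive-index distribution n(x, y, z) and free-space wave number k0 = 2π/λ, the Helmholtz equation reads:

```latex
\nabla^2 \phi + k_0^2\, n^2(x,y,z)\, \phi = 0
```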

In BPM the field φ is assumed to be the product of a rapidly varying component associated with the phase and a slowly varying envelope such that:
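In standard form this substitution is written:

```latex
\phi(x,y,z) = u(x,y,z)\, e^{ikz}
```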

where k is a reference wave number chosen to represent the average phase variation of the field φ.

When such a substitution is made, the Helmholtz equation becomes a function of the slowly varying envelope u, and it contains both first- and second-order derivatives of u with respect to z. In BPM the variation of u with z is assumed to be sufficiently slow that the second-order derivative is negligible compared to the first-order derivative. And dropping the second-order derivative, the Helmholtz equation is now rewritten in the following form:
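Carrying out the substitution and dropping the second-order z derivative gives the standard paraxial form, with k0 the free-space wave number and n the refractive index:

```latex
\frac{\partial u}{\partial z} = \frac{i}{2k}\left[\frac{\partial^2 u}{\partial x^2}
+ \frac{\partial^2 u}{\partial y^2} + \left(k_0^2 n^2 - k^2\right)u\right]
```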

This is the three-dimensional scalar wave equation in the parabolic (or paraxial) approximation, sometimes referred to as basic BPM. The solution has now been reduced from a second-order boundary-value problem to a first-order initial-value problem that can be solved by a relatively simple integration of the above equation along the propagation direction z.
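The marching solution can be sketched in one transverse dimension using a Crank-Nicolson step along z. The slab-waveguide profile and all numerical parameters below are invented for the demonstration; they are not from the article.

```python
import numpy as np

# Illustrative 1-D-transverse finite-difference BPM: march the slowly varying
# envelope u along z by solving (I - a*M) u_new = (I + a*M) u_old each step,
# where M discretizes d2/dx2 + (k0^2 n^2 - kbar^2).
nx, dx = 200, 0.1          # transverse grid (um)
dz, nsteps = 0.5, 100      # propagation step size and count (um)
lam = 1.55                 # wavelength (um)
n_clad, n_core = 1.45, 1.47
k0 = 2 * np.pi / lam
kbar = k0 * n_clad         # reference wave number

x = (np.arange(nx) - nx / 2) * dx
n = np.where(np.abs(x) < 2.0, n_core, n_clad)   # step-index slab, 4-um core

u = np.exp(-(x / 2.0) ** 2).astype(complex)     # Gaussian launch field
p0 = np.sum(np.abs(u) ** 2)                     # launched power (grid sum)

main = -2.0 / dx**2 + (k0**2 * n**2 - kbar**2)
off = np.ones(nx - 1) / dx**2
M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
a = 1j * dz / (4 * kbar)                        # Crank-Nicolson half-step factor
A = np.eye(nx) - a * M
B = np.eye(nx) + a * M

for _ in range(nsteps):
    u = np.linalg.solve(A, B @ u)               # one z step of the initial-value march

power = np.sum(np.abs(u) ** 2)                  # conserved by the unitary step
```

Because M is real and symmetric, the Crank-Nicolson step is unitary and the guided power is conserved along the march, which is one reason this scheme is popular in practice.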

The biggest advantage of BPM is numerical efficiency since it requires relatively small amounts of memory and computational power. However, there are some restrictions on its use, of which the most fundamental is that the gradient of k along z be small. Hence, solutions are limited to those in which the fields propagate within a narrow range of angles (±5°) about the z axis (paraxiality).

The basic BPM is also not appropriate for problems with wide dynamic range of propagation constants, for example in some high index contrast devices. Furthermore, it is inherently unidirectional so it will not model reflected energy. And finally, devices for which phase variation is critical may not be well served by the basic BPM, but can often be handled by the wide-angle BPM extension. Multimode interference devices (MMIs) are one such example.

A number of extensions have been developed to overcome the specific limitations of BPM.1 The use of Padé approximants to produce wide-angle BPM relaxes the paraxial restriction so that fields propagating at angles of up to ±70° can be accurately simulated. Wide-angle BPM also eases the restrictions on high index contrast and on phase-sensitive devices (see figure). Bidirectional BPM considers multiple interfaces and reflections in a self-consistent and efficient way, and in doing so can accurately simulate reflections.

Polarization effects (vectorial BPM) can be included in the BPM by recognizing that the electric field E is a vector, and starting the derivation from the vector wave equation rather than the scalar Helmholtz equation. Anisotropic materials can be dealt with by extending the definition of vectorial BPM to include a dielectric tensor. Nonlinear materials can be accommodated by allowing the refractive index appearing in the equations to be a function of the optical-field intensity.

  1. R. Scarmozzino et al., J. Sel. Topics in Quant. Electron. 6, 150 (2000).

Photonic crystals guide the light

Photonic bandgap structures, also known as photonic crystals, are optical materials with periodic variations in the dielectric constant. The periodicity is analogous to the crystal lattice of a semiconductor, with the electronic bandgap replaced by a photonic bandgap.

Photonic band diagrams can be produced using the FDTD algorithm by implementing periodic boundary conditions.1 In conventional waveguides, such as ridge and channel waveguides or optical fiber, light is guided by index confinement. However, to produce efficient small radius bends, the guide must be fabricated with very high index contrast. Otherwise, the bend will be lossy.

In a photonic crystal, a guide is created by removing some of the periodic features, in essence creating a defect in the crystal structure. The photonic bandgap of this structure defines the range of frequencies at which energy will not propagate through the bulk of the photonic crystal; at these frequencies, modes propagate well within the linear defect. The design and simulation of a PBG waveguide with two 60° bends operating at λ = 1.55 µm can be understood in this way (see figure). A key element of this design was minimizing reflections from the bends to produce a transmission close to 100%. Such bends can be achieved in an extremely small footprint without appreciable loss.

  1. G. Mur, IEEE Trans. Electromagn. Compat. EMC-23, 377 (1981).