FlowSteer: Conditioning Flow Field for Consistent Image Restoration
Tharindu Wickremasinghe1, Chenyang Qi2, Harshana Weligampola1, Zhengzhong Tu3, Stanley H. Chan1
1Purdue University 2HKUST 3Texas A&M University
Figure 1. Left: FlowSteer estimates a clean image $x$ from a measurement $y$ generated by a degradation operator $A$. It achieves both pixel-level fidelity by conditioning on the degradation model, and high perceptual quality by utilizing prompt and attention guidance. Right: FlowSteer demonstrates an advantage on colorization, super-resolution, and deblurring tasks in both fidelity and perceptual quality.
Abstract
Flow-based text-to-image (T2I) models excel at prompt-driven image generation, but falter on Image Restoration (IR), often "drifting away" from being faithful to the measurement. Prior work mitigates this drift with data-specific flows or task-specific adapters that are computationally heavy and not scalable across tasks. This raises the question: "Can't we efficiently manipulate the existing generative capabilities of a flow model?" To this end, we introduce FlowSteer (FS), an operator-aware conditioning scheme that injects measurement priors along the sampling path, coupling a frozen flow's implicit guidance with explicit measurement constraints. Across super-resolution, deblurring, denoising, and colorization, FS improves measurement consistency and identity preservation in a strictly zero-shot setting: no retrained models, no adapters. We show how the nature of flow models and their sensitivity to noise informs the design of such a scheduler. FlowSteer, although simple, achieves higher fidelity in the reconstructed images, while leveraging the rich generative priors of flow models. All data and code will be public here.
1. Introduction
Over the past few years, diffusion models have demonstrated impressive generative capability [21,52] for many image editing [2,18,37,56] and restoration tasks [9,25,46,51]. Diffusion models, however, are known to have computational issues, because one needs to run tens to hundreds of diffusion steps to obtain the result [12,44,50]. The more recently introduced flow models have shown initial success as an alternative to diffusion-based Text-to-Image (T2I) models, generating images with higher perceptual quality and aesthetics [1,45,64]. However, flow models have not been widely adopted in image restoration tasks. If flow models are consistently faster and produce higher aesthetic quality, why are they not used in image restoration?
The biggest challenge of using a flow model for image restoration is the requirement for image fidelity. In image editing, the model can be more imaginative because the goal is not to preserve pixel-level fidelity [38,55,61,63]. In image restoration, however, the forward image formation model provides a more restrictive constraint on what images are allowed to be generated. In theory, if a flow model is conditioned sufficiently, it is possible to obtain a restored image $\hat{x}$ with sufficient fidelity, such that $y \approx A\hat{x}$,
where $A$ is the degradation operator and $y$ is the degraded measurement. If this can be achieved, then we will have both superior generative restoration and much faster inference than the diffusion counterparts.
More recent image editing methods use implicit conditioning with embedded features [37,58,60]. First, a degraded input image is inverted into a noisy latent through flow inversion. At each step, features from the underlying Diffusion Transformer (DiT) are cached as layout guidance [26,27,30]. The inverted latent is then denoised by a prompt-guided Flux [3,31] model, with selected steps fusing the cached features back into the trajectory. This approach only guides the overall layout of the input image, without any explicit conditioning on pixel-level fidelity. Therefore, the output images have high perceptual quality and visual appeal, but the identity of the subject is lost. On the other hand, explicit pixel-level conditioning has been attempted through PnP-Flow [36] at every step of the denoising path. As a result, it is prone to excessive blur artifacts, and cannot achieve the rich colors, sharp textures, and high perceptual quality achievable with the generative priors.
In this paper, we present a flow-based image restoration method by making a critical observation: in flow-based image restoration, the forward model cannot be uniformly applied throughout the solution trajectory. Instead, we introduce a scheduler, known as FlowSteer (FS), to control the amount of forward-model conditioning added to the solution trajectory. Specifically, no conditioning should be introduced in the beginning, when the estimate is still largely noise in the latent space. Conditioning should be introduced toward the middle or latter stage of the solution trajectory, when the estimate is becoming a meaningful image. Since FlowSteer is a scheduler that can be added to any flow model, it offers a training-free, zero-shot upgrade of flow-based image restoration at nearly no additional cost. To demonstrate this, we adapt a flow-based inversion-reconstruction pipeline: shared attention features combined with text prompts provide implicit conditioning, whereas the FlowSteer schedule offers pixel-level conditioning.
In summary, our main contributions are threefold:
1. We examine and explain the challenges of conditioning flow models towards pixel-level fidelity.
2. We propose FlowSteer, a scheduler that mitigates these challenges by telling the flow model when to include forward-model conditioning.
3. We demonstrate that FlowSteer is a universal scheme that can be adapted to a variety of image restoration tasks, including but not limited to deblurring, denoising, colorization, and super-resolution.
Algorithm 1 Fidelity update in a diffusion path [62]
1: $x_N \sim \mathcal{N}(0, I)$
2: for $t = N, \ldots, 1$ do
3:   $x_{0|t} \leftarrow \frac{1}{\sqrt{\bar{\alpha}_t}}\left(x_t - Z_\theta(x_t, t)\sqrt{1 - \bar{\alpha}_t}\right)$   ▷ denoise
4:   $\hat{x}_{0|t} \leftarrow A^\dagger y + (I - A^\dagger A)\,x_{0|t}$   ▷ fidelity update
5:   $x_{t-1} \sim \mathcal{N}\!\left(\mu_t(x_t, \hat{x}_{0|t}),\, \sigma_t^2 I\right)$   ▷ project back
6: end for
7: return $x_0$
2. Related Works
2.1. Image restoration with measurement fidelity
Sampling diffusion paths. Since the introduction of diffusion models [21,39,52,53] for image generation, people have quickly realized that these models can be used in image restoration. Diffusion-based image restoration models [7,24,25,62] pioneered operator-aware restoration. Every sample along the reconstruction path can be conditioned to have fidelity towards a noise-free forward model $y = Ax$ through a projection step. The core idea is to alternate between 1) a step in the direction of the log-likelihood $\nabla_x \log p(y|x)$ for higher fidelity, and 2) a denoising step, i.e., a step towards the log-prior $\nabla_x \log p(x)$. Algorithm 1 shows how the diffusion schedule of DDIM [52] was modified for image restoration through a $\nabla_x \log p(y|x)$ step by DDNM [62].
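To make the projection concrete, the following is a minimal NumPy sketch of the null-space fidelity update (line 4 of Algorithm 1); the dense matrix stand-ins for $A$ and its pseudo-inverse are our own illustrative assumptions, not the operators used in any specific task.

import numpy as np

def fidelity_update(x0_t, y, A, A_pinv):
    # DDNM-style projection: take the measurable component from A^+ y
    # and keep x0_t only in the null space of A.
    return A_pinv @ y + x0_t - A_pinv @ (A @ x0_t)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))        # toy degradation operator
A_pinv = np.linalg.pinv(A)              # Moore-Penrose pseudo-inverse
x = rng.standard_normal(16)             # ground-truth signal
y = A @ x                               # noise-free measurement y = Ax
x0_t = rng.standard_normal(16)          # current denoised estimate
x_hat = fidelity_update(x0_t, y, A, A_pinv)
print(np.allclose(A @ x_hat, y))        # True: the update enforces A x_hat = y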
More updates along the sampling path. Various additional improvements to the sampling path have been introduced since then. DiffPIR [68] wraps a plug-and-play style update [57] inside each step along the diffusion sampling path. DPS [9] extends the work beyond the linear model, and relaxes the strict measurement-consistency update. This allows non-linear and noisy inverse problems to be solved with diffusion priors. Recently, DPIR [29] showed that using both text prompts and the degraded image improves reconstruction quality. RDMD [59] suggests that a linearly weighted sum between a stochastic generative sample and a deterministic update can improve the perception-distortion trade-off. All of the above improvements can be considered as "scaffolding" around Algorithm 1.
2.2. Flow-based image editing and restoration
Flow models. Flow Matching (FM) [32] and Rectified Flow (RF) [34] reframe generative modeling as learning velocity fields/ODE transports, aiming for linear sampling paths, fewer steps, and stronger quality [49]. Recent DiT/RF-Transformer systems [14,48] scaled these ideas for Text-to-Image (T2I) generation, making flow-style models much faster during inference (e.g., SD3/DiT [54] and FLUX-style rectified-flow transformers [3]).
Editing with rectified flows. RF models perform much
Figure 2. Image inversion (left) and reconstruction (right) paths of a flow model. A pre-trained flow trajectory $v_\theta$ is conditioned implicitly by language prompts $(C_1, C_2)$ and feature sharing between paths. FlowSteer introduces an explicit conditioning schedule (center) to steer the reconstruction path toward high pixel-level fidelity without retraining the model.
better than their diffusion counterparts in image editing with T2I conditioning [10,64]. RF-Edit [60] derives second-order samplers and shares features between inversion and editing to reduce the drift. Further works show fast, training-free inversion and editing in as few as 8 steps, and can edit specific regions of an image [10,11,22,45]. However, these flow models are not common in image restoration, for they are unable to produce consistent images with high fidelity to a measurement.
Flow models for restoration. Works such as PnP-Flow [36], D-Flow [4], and Flow Priors [67] demonstrate that flow models can be used in several image restoration tasks. However, these require a flow model to be separately trained for each dataset. Thus, they are not transferable to off-the-shelf flow-based T2I models (such as Flux [31]), and hence are not scalable.
In this paper we show that pre-trained flow models can, in fact, be used for a range of image restoration tasks in a truly zero-shot manner, without any retraining of the model. We find that flow models are sensitive to perturbations in conditioning, and that conditioning using the forward model should be introduced in the middle or later stages of the reconstruction path.
3. Flow Models and Restoration
We review Rectified Flow models and conditioned generation. We then examine how inversion and reconstruction have been conditioned for image editing, and their inherent limitations. Finally, we look at how the data-fidelity term can be explicitly conditioned in an ideal flow model.
3.1. Rectified Flow models
Flow models are trained to map a clean image $x_0$, sampled from a clean distribution $\pi_0$, to a noise sample of the same dimensionality, $x_1 \sim \pi_1$. For Rectified Flows (RF), intermediate samples along the path are linear interpolations between $x_0$ and $x_1$:
$$x_t = (1 - t)\,x_0 + t\,x_1, \quad t \in [0, 1]. \tag{1}$$
The training scheme is to learn a velocity field $v_\theta(\cdot)$, which is trained to predict the velocity between the two image distributions by minimizing
$$\min_\theta \int_0^1 \mathbb{E}\left[\left\|(x_1 - x_0) - v_\theta(x_t, t)\right\|_2^2\right] dt. \tag{2}$$
Given a sample of an intermediate image $x_t$ and the time index $t \in (0, 1)$, an ideally trained RF model should output the velocity that points to the desired clean image. For an incremental $dx_t$, we can write $dx_t = v_\theta(x_t, t)\,dt$. Thereby, we have the Euler update for solving the first-order ODE, implemented for a time schedule $t = t_N, \ldots, t_0$:
$$x_{t_{i-1}} = x_{t_i} + (t_{i-1} - t_i)\,v_\theta(x_{t_i}, t_i), \quad i \in \{N, \ldots, 1\}. \tag{3}$$
Assuming an ideal model $v_\theta$ that accurately estimates the velocity field conditioned on a text prompt $C$ and timestep $t$, Classifier-Free Guidance [20] is further utilized to control the trajectory's drift $dx_t$ to align with the target attributes specified by $C$.
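As a concrete illustration, the following is a minimal sketch of the Euler sampler in Equation (3); the toy velocity closure is a hypothetical stand-in for the trained network $v_\theta$, and the linear time schedule is our own assumption.

import numpy as np

def euler_sample(velocity, x1, N=30):
    # Integrate dx_t = v(x_t, t) dt from t = 1 (noise) down to t = 0 (image).
    ts = np.linspace(1.0, 0.0, N + 1)           # t_N = 1, ..., t_0 = 0
    x = x1.copy()
    for i in range(N):
        t, t_next = ts[i], ts[i + 1]
        x = x + (t_next - t) * velocity(x, t)   # Eq. (3), step of size t_{i-1} - t_i
    return x

# Toy flow: with x0 = 0 the straight path is x_t = t * x1, whose exact
# velocity x1 - x0 can be read off the state as x / t.
x1 = np.random.default_rng(0).standard_normal(4)
x0 = euler_sample(lambda x, t: x / max(t, 1e-6), x1)
print(np.abs(x0).max())   # ~0: the sampler reaches the clean endpoint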
3.2. Image Inversion and Reconstruction
Given an input measurement $y$ with degradation, the image editing framework first inverts it to a noise distribution $\pi_1$. In each step of inversion, the velocity field $v_\theta(\cdot, t; C_1)$ is conditioned on a description $C_1$ of the degraded image (see Figure 2, left). Additionally, the attention maps at each step are cached to be fused with the reconstruction path.
Figure 3. Effect of varying classifier-free guidance $\gamma$ and feature-sharing steps $\zeta$ on restoration quality. Increasing $\gamma$ enhances color saturation but reduces fidelity to the measurement $y$, while increasing $\zeta$ improves pixel-level consistency without recovering true color.
Next, the inverted latent is denoised by sampling $v_\theta(\cdot, t; C_2)$, which is conditioned on a prompt $C_2$ that describes the target image. As illustrated on the right of Figure 2, the discrete sampling steps form a trajectory from noise to a clean image, where the cached attention maps are fused at the corresponding steps. While this process steers the reconstruction toward a general manifold of high perceptual quality, the fidelity achieved by this standard inversion-and-reconstruction approach is insufficient for precise, high-quality image restoration.
3.3. Limitations of Implicit Conditioning
Fusing attention features from the inversion path into the reconstruction path has its limitations. Most pre-trained flow models such as Flux [31] are trained in an encoded feature space mapped by VAE autoencoders [28,44]. Thus, the entire flow algorithm runs in this embedding space, rather than in the pixel space. An attention map is a similarity over the tokens generated by $2 \times 2$ patches on these VAE embeddings, which are compressed from the pixel space (e.g., Flux uses $16\times$ compression). Thus, fusing attention features only preserves coarse-level details without guidance on low-level texture within each token. Furthermore, these coarse features are not decoupled, and we cannot separate out the unwanted artifacts. Consider the example shown in Figure 3. Having many steps ($\zeta$) that copy features from the inversion path tends to copy unwanted artifacts, leading the results to have a black-and-white color palette after reconstruction.
The other widely used parameter is the Classifier-Free Guidance (CFG) [20] scale, which controls the generative hallucinations of the flow model. This parameter $\gamma$ has to be tuned together with $\zeta$. Notice that even the best configuration of $(\gamma, \zeta)$ gives an image lacking in pixel-level fidelity. This motivates the explicit use of the measurement $y$ and the
Algorithm 2 Enforcing fidelity into an ideal flow path
1: $\hat{x}_N \sim \pi_1 = \mathcal{N}(0, I)$
2: $C \leftarrow$ "A colorful image of a tiger..."
3: for $i = N, \ldots, 1$ do
4:   $\hat{x}_{t_i} = \hat{x}_{t_{i+1}} + v_\theta(\hat{x}_{t_{i+1}}, t_i; C)\,(t_i - t_{i+1})$
5:   $\hat{x}_{0|t_i} = \frac{1}{(1-t_i)+\epsilon}\,\hat{x}_{t_i} - \frac{t_i}{(1-t_i)+\epsilon}\,\eta$   ▷ denoise
6:   $\bar{x}_{0|t_i} \leftarrow A^\dagger y + (I - A^\dagger A)\,\hat{x}_{0|t_i}$   ▷ fidelity update
7:   $\hat{x}_{t_i} = (1 - t_i)\,\bar{x}_{0|t_i} + t_i\,\eta$   ▷ project back
8: end for
9: return $\hat{x}_0$
degradation operator $A$ for explicit guidance towards the high-fidelity region.
3.4. Explicit Conditioning for Ideal Flows
Assuming a known linear forward model $y = Ax$, we can attempt pixel-level fidelity by adapting the ideas presented in Section 2.1, explicitly enforcing $A\hat{x}_{0|t_i} = y$ as outlined in Algorithm 2.
Suppose that at time $t_{i+1}$ in the reconstruction path, we have a prediction $\hat{x}_{t_{i+1}}$; advancing it gives line 4 of Algorithm 2. To match the framework of the null-space update in Algorithm 1, we need to follow the forward path defined for $v_\theta(\cdot)$ to predict $\hat{x}_{0|t}$. This is the denoising step (Step 3) of Algorithm 1, which can be adapted for flows using the forward model of Equation (1). Assuming an ideally behaving flow model, the intermediate $x_{t_i}$ lies at the linearly interpolated position $\hat{x}_{t_i} = t_i x_1 + (1 - t_i)\hat{x}_0$. Taking $x_1$ to be Gaussian noise $\eta \sim \mathcal{N}(0, I)$, we get
$$\hat{x}_{0|t_i} = \frac{1}{(1-t_i)+\epsilon}\,\hat{x}_{t_i} - \frac{t_i}{(1-t_i)+\epsilon}\,\eta.$$
Notice how this can be interpreted as a denoising step $\hat{x}_{0|t} \approx D(\hat{x}_t)$. Next, the predicted clean image is updated in the null space, followed by a re-projection back onto the intermediate step $t_i$.
Note that this analysis is valid only if we have access to an ideal flow model $v_\theta$. Even with such a model, we require the $\epsilon$ term in the denominators during implementation for numerical stability.
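A minimal sketch of one ideal-flow fidelity step (lines 5-7 of Algorithm 2) under a generic linear operator; the matrix arguments and the fixed noise draw `eta` are illustrative assumptions rather than the exact implementation.

import numpy as np

def ideal_flow_fidelity_step(x_t, t, y, A, A_pinv, eta, eps=1e-6):
    # Denoise: invert x_t = (1 - t) x0 + t eta for x0 (line 5).
    x0_t = x_t / ((1.0 - t) + eps) - (t / ((1.0 - t) + eps)) * eta
    # Fidelity: null-space update toward the measurement (line 6).
    x0_bar = A_pinv @ y + x0_t - A_pinv @ (A @ x0_t)
    # Project back onto the interpolation path at time t (line 7).
    return (1.0 - t) * x0_bar + t * eta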
4. Steering Rectified Flows in the Wild
4.1. Fragile, non-ideal flow models
Flow models are sensitive to noise. Unlike diffusion models, flow models are not trained with a $\sigma_t$ schedule, and have no inherent "buffer" for randomness at each step $t$. However, the following sources of noise $\eta$ are inevitable:
1. No RF model predicts the true velocity at every time step. Thus $v_\theta(\hat{x}_{t_{i+1}}, t_i) = v_\theta^{\mathrm{ideal}}(\hat{x}_{t_{i+1}}, t_i) + \eta_1$, and the integrated state inherits this error: $x_{t_i} = x_{t_i}^{\mathrm{ideal}} + \eta_1$.
2. The forward and backward operators derived from $A$ may not be fully accurate. For example, for colorization and super-resolution, the pseudo-inverse operator $A^\dagger$ is not exact. Hence $\hat{x}_{t_i} \leftarrow A^\dagger y + (I - A^\dagger A)\,\hat{x}_{t_i} + \eta_2$.
Figure 4. Linear projections of flow models exacerbate the errors from non-ideal velocities. We recommend avoiding projection operations to estimate $\hat{x}_{0|t_i}$ when $t_i \to 1$.
3. RF models are trained in an embedding space through autoencoders. These embeddings need to be converted back to pixel space before pixel-level fidelity is conditioned.
The above sources of inherent randomness, added to numerical/precision errors, create random noise at each step where we choose to enforce fidelity. Suppose that noise is characterized by a standard deviation $\eta_{\text{effective}}$. In diffusion models, the training scheme inherently has a random noise schedule $\sigma_t$. Therefore, as long as the hyper-parameters ensure $\eta_{\text{effective}} \ll \sigma_t$, such fidelity updates at every step do not pose problems. In contrast, since the training scheme of flow models does not account for such randomness (see Equation (1)), even a well-trained flow model with powerful priors cannot converge to desirable images under such noise-inducing fidelity updates at each step. This is the reason for the fragility of diffusion-style updates on flow models, and we posit that this is a key challenge for the research community in adapting flow models for image restoration.
4.2. Steering the Flow
As we shall see, some practical assumptions lead us to a schedule where an explicit update does not divert the flow unnecessarily. The only adjustment required is a parameter scheduler $\{\lambda_i\}$ to achieve the explicit conditioning that we expected from Algorithm 2. The practical decisions that lead to this scheduler are as follows:
Avoid conditioning early in the reconstruction. Note that, as a consequence of the non-ideal nature of flow models, the noise gets scaled by a factor of $t$, and the estimation error for $\hat{x}_{0|t_i}$ scales with $t_i$. A high-level view of this is shown in Figure 4.
Avoid projecting to and from $\hat{x}_{0|t}$. When $t_i$ is close to 0, we can approximate the denoise step of Algorithm 2 as $\hat{x}_{0|t_i} \approx \hat{x}_{t_i}$. This avoids linearly projecting to find an approximate $x_{0|t_i}$, and results in the fidelity update enforcing $A\bar{x}_t \approx y$ at the steps $t_i$ where we decide to condition the flow.
Algorithm 3 FlowSteer for a non-ideal flow model
1: $\hat{z}_N \sim \pi_1 = \mathcal{N}(0, I)$
2: $C \leftarrow$ "A colorful image of a tiger..."
3: for $i = N, \ldots, 1$ do
4:   $\hat{z}_{t_i} = \hat{z}_{t_{i+1}} + v_\theta(\hat{z}_{t_{i+1}}, t_i; C)\,(t_i - t_{i+1})$
5:   if $\lambda_i > 0$:
6:     $\hat{x}_{t_i} = \mathrm{Decoder}(\hat{z}_{t_i})$
7:     $\hat{x}_{t_i} \leftarrow A^\dagger y + \lambda_i\,(I - A^\dagger A)\,\hat{x}_{t_i}$   ▷ fidelity update
8:     $\hat{z}_{t_i} = \mathrm{Encoder}(\hat{x}_{t_i})$
9: end for
10: return $\mathrm{Decoder}(\hat{z}_0)$
This is a strict constraint to follow at each chosen step, and makes intuitive sense: $y = A\hat{x}_t = A\hat{x}_0$ when $x_t \to x_0$. Therefore, we should avoid Steps 5 and 7 of Algorithm 2, and wait until $t_i$ is closer to 0. One can also see intuitively from Figure 4 that, with such noise sensitivity, we should avoid further noise-inducing linear projections until necessary. Our ablations further validate these recommendations.
4.3. A sparse update scheduler
The above two suggestions lead us to design a scheduler that decides at which steps along the reconstruction path the constraint $A\hat{x}_{t_i} \approx y$ is enforced to achieve desirable fidelity. To control this, we propose using the fidelity update step with a sparse parameter schedule $\{\lambda_i\}$, for which the fidelity update triggers only if $\lambda_i > 0$. Furthermore, since available RF models operate in an embedding space, a Decoder-Encoder wrapper is needed to convert these embeddings into pixel space. The complete procedure is summarized in Algorithm 3.
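In code, one FlowSteer iteration (lines 4-8 of Algorithm 3) looks roughly as follows; `velocity`, `decode`, `encode`, and the operator callables `A` and `A_pinv` are hypothetical stand-ins for the DiT velocity network, the Flux VAE, and the task operators.

def flowsteer_step(z, t, t_prev, y, lam, velocity, decode, encode, A, A_pinv):
    # Euler step in latent space (t < t_prev, moving toward the clean image).
    z = z + (t - t_prev) * velocity(z, t)
    if lam > 0:                                   # the scheduler gates the update
        x = decode(z)                             # latents -> pixel space
        x = A_pinv(y) + lam * (x - A_pinv(A(x)))  # damped null-space update
        z = encode(x)                             # pixel space -> latents
    return z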
4.4. Our baseline flow model and schedule recommendations
The restoration pipeline. We first convert our input image into a latent space using the VAE autoencoder. These latents are inverted into a noisy latent distribution through flow inversion, and we cache the attention map during inversion as layout guidance. Then, the inverted latent is denoised with multi-step inference of the Flux model. Our FlowSteer schedule decides the steps at which we merge both the layout guidance from the attention map and pixel-wise guidance from the degraded prior. At these steps, the latents are decoded back to image space, and the fidelity update is performed.
The flow model. We select the Flux-dev model [31] as our pre-trained RF model, with inversion and reconstruction paths having $N = 30$ steps each. We then design our image inversion and reconstruction pipeline as described in Section 3.2. For each input image, an input caption $C_1$ describes the image with its degradation, and the target caption $C_2$ describes the ideally restored image. We cache the attention maps of each inversion step for copying in the
Figure 5. Colorizing with different $\{\lambda_i\}$ schedules on our baseline RF model. No explicit conditioning creates hallucinations and loses identity. Constant conditioning with $\lambda = 1$ introduces undesired noise.
reconstruction path, as specified by $\zeta$. The hyperparameters $\gamma$ and $\zeta$ (as described in Section 3.3) are chosen so that the best available trade-off between fidelity and perceptual quality is obtained, and are kept at $(4, 4)$ for all experiments. This forms our implicit conditioning scheme, on top of which FlowSteer is applied.
FlowSteer scheduler. To improve fidelity and explicitly condition on the forward model, we design the FS scheduler $\{\lambda_i\}$ so that a set of validation images achieves a desirable trade-off between fidelity and reconstruction quality. In its simplest form, the schedule is a rectangular window, with three parameters $i_{\text{start}}, i_{\text{stop}}, h$ defining the schedule:
$$\lambda_i = \begin{cases} h & \text{if } i_{\text{start}} \le i \le i_{\text{stop}}, \\ 0 & \text{otherwise.} \end{cases} \tag{4}$$
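The rectangular window of Equation (4) is a one-liner in practice; a minimal sketch, assuming fractional endpoints expressed as multiples of the total step count $N$, as in our tables.

import numpy as np

def rectangular_lambda_schedule(N, i_start, i_stop, h=1.0):
    # Eq. (4): lambda_i = h inside [i_start*N, i_stop*N], 0 elsewhere.
    i = np.arange(N)
    return np.where((i >= i_start * N) & (i <= i_stop * N), h, 0.0)

lam = rectangular_lambda_schedule(N=30, i_start=0.5, i_stop=0.9)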
Schedule recommendations. In general, we recommend that the schedule start enforcing fidelity between 50%-90% of the total steps. The intuition for this is based on how image generation proceeds from coarse to fine details along the reconstruction path, as discussed in recent work [18,40,47]. In the early steps, the color palette is formed, and the foreground and background are separated. Towards the end, finer details and textures are formed. Conditioning too late risks unwanted hallucinations. Conditioning too early introduces a higher effective noise (as described in Section 4.2) and directs the restoration toward a noisy output. Depending on the reconstruction task, $(i_{\text{start}}, i_{\text{stop}})$ can be further fine-tuned for improved performance. The relative strength of the fidelity update can also be controlled by $h$ to mitigate unwanted artifacts from the noisy conditioning step. Refer to the ablations in Section 5.3 for the effects of $(i_{\text{start}}, i_{\text{stop}})$ and other alternatives.
5. Experiments
In this section, we evaluate FlowSteer on image restoration tasks. Section 5.1 presents implementation details, Section 5.2 compares our framework with other comparable methods, and Section 5.3 discusses ablation studies.
5.1. Implementation details
Forward models. We select four restoration tasks: colorization, denoising, deblurring, and $4\times$ super-resolution. Each forward model is modeled as an operator $A$ and assigned a pseudo-inverse operator $A^\dagger$ to implement Algorithm 3. Linear transforms are assumed following Wang et al. [62] for colorization and super-resolution. For deblurring, we follow Martin et al. [36] with a $61 \times 61$-pixel Gaussian kernel with blur $\sigma_b = 3.0$ for $A$. A Wiener deconvolution operator [16,17] with $\lambda_{\text{Wiener}} = 0.1$ is used for $A^\dagger$. For denoising, the forward model is additive Gaussian noise of $\sigma_g = 0.2$, and we set $A = A^\dagger = I$ for the fidelity update. We refer to the supplementary material for more details.
Data and metrics. A set of 100 sampled images from the AFHQ [8] and CelebA-HQ [23,35] datasets is selected as our test data. A sample of 20 images from the same sources forms the validation split used to tune hyper-parameters and run ablations. This forms a collection of human faces, pets, and wild animals. Our evaluation focuses on pixel-level fidelity and perceptual quality. PSNR and SSIM measure the fidelity, i.e., the consistency of the generated image with the forward model. To report the perceptual similarity between the predicted and ground-truth images, we use LPIPS [65] and the cosine similarity between CLIP [43] embeddings. For colorization, we first apply a histogram-matching algorithm to each color channel to ensure that global effects such as brightness, saturation shifts, and contrast do not affect the metrics.
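The per-channel histogram matching we describe can be done, for example, with scikit-image; a minimal sketch of such a pre-metric normalization, assuming channel-last images (whether the paper's implementation uses this library is our assumption).

import numpy as np
from skimage.exposure import match_histograms

def normalize_for_metrics(pred, target):
    # Match each color channel of `pred` to `target` so that global
    # brightness, saturation, and contrast shifts do not affect PSNR/SSIM.
    return match_histograms(pred, target, channel_axis=-1)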
5.2. Comparisons
We compare the restoration capability of FlowSteer with other flow-based methods: OT-ODE [41], D-Flow [4], Flow-Priors [67], and PnP-Flow [36]. These methods induce a fidelity update at each step and, therefore, produce more artifacts in the final restoration. To reproduce PnP-Flow [36], we use the separate flow models for humans and animals as provided by the authors. For our method, however, we use the same Flux model for all types of data, and show that a large pretrained T2I model, such as Flux, can be steered toward faithful image restoration.
Since models with an inversion-reconstruction pipeline have not been used for image restoration, we reproduce RF-Edit [60], tune the corresponding $(\gamma, \zeta)$ parameters through a grid search, and keep them constant throughout the test data. This baseline demonstrates that implicit conditioning loses the identity of the image, and that FlowSteer can
Figure 6. Qualitative comparison of flow-based methods (rows: colorization, super-resolution, denoising, deblurring; columns: Degraded, D-Flow [4], FlowPriors [67], PnP-Flow [36], RFEdit [60], Ours, Clean). Columns 2-4 are image restoration models, and column 5 is an image editing model. The rows show four degradations: tasks focused on information reconstruction (colorization and 4× super-resolution) and tasks focused on corruption removal (denoising and deblurring). FlowSteer removes degradations, and details are generated without losing the identity of the subject.
achieve high pixel-level fidelity while maintaining high visual quality.
Fidelity and perceptual quality. FlowSteer preserves the perceptual qualities of the pre-trained T2I Flux model (such as rich colors, fine details, and textures), while also achieving high pixel-level fidelity with the degraded measurement. It enhances degraded details without losing the identity of the subject. In the qualitative results of Figure 6, we see that characteristic facial features, whiskers, teeth, fur, etc. are enhanced without changing the identity of the subject. This is reflected quantitatively in Table 1.
5.3. Ablations
The effect of $\{\lambda_i\}$. We experimentally validate the practical design choices outlined in Section 4.2. Continuous conditioning with $\lambda = 1$ at every step introduces a noise that is only exacerbated as more steps are taken by the flow model. Our ablations (Figure 5) verify that explicit conditioning should generally be done in the middle of the schedule. The parameters $(i_{\text{start}}, i_{\text{stop}}, h)$ may be further tuned for better reconstructions.
Complex schedules. $\{\lambda_i\}$ can be tuned for each task separately, achieving better reconstruction scores. In Table 2 we show the parameters of a two-step window, which has an additional parameter $i_{\text{step}}$ indicating the step at which the conditioning strength $\lambda_i$ changes from $h_1$ to $h_2$. Although more complex schedules can be designed, our results in Table 1 are based on this two-step schedule.
Avoiding projections to $\hat{x}_{0|t_i}$. In Section 4.2 we recommend avoiding the linear projection steps and using $\hat{x}_{0|t_i} \approx \hat{x}_{t_i}$. This claim is experimentally verified in Table 3 and Figure 7. Note the remaining noise in the final reconstructions. The flow model is not trained with a noise schedule $\sigma_t$ at each step. Thus, it conflates noise artifacts with necessary textures, and suggests a velocity update that highlights these unwanted artifacts.
The diffusion counterpart. We compare FlowSteer with DDNM [62], which implements Algorithm 1 on a pre-trained guided-diffusion network [12]. We observe that FlowSteer performs better, owing to the powerful generative priors of the Flux model. The implementation of DDNM takes 100 steps on the reconstruction path [13], while FlowSteer samples the Flux model for $N = 30$ steps for each reconstruction task.
Colorization (PSNR↑ / SSIM↑ / LPIPS↓ / CLIP↑):
  Reference         27.1891 / 0.8333 / 0.2334 / 0.5807
  D-Flow [4]        18.5295 / 0.5554 / 0.4977 / 0.2724
  OT-ODE [41]       N/A
  Flow-Priors [67]  21.7889 / 0.5716 / 0.4639 / 0.4808
  PnP-Flow [36]     27.1620 / 0.8640 / 0.2830 / 0.3482
  RFEdit [60]       20.3255 / 0.6602 / 0.3648 / 0.8703
  Ours              27.4214 / 0.8696 / 0.2081 / 0.7734

Super-resolution (PSNR↑ / SSIM↑ / LPIPS↓ / CLIP↑):
  Reference         26.7337 / 0.7829 / 0.2381 / 0.4380
  D-Flow [4]        23.4039 / 0.6236 / 0.4132 / 0.2984
  OT-ODE [41]       29.1072 / 0.8296 / 0.1994 / 0.6033
  Flow-Priors [67]  27.6858 / 0.7151 / 0.3036 / 0.4724
  PnP-Flow [36]     31.2073 / 0.8753 / 0.1755 / 0.3938
  RFEdit [60]       22.3243 / 0.7362 / 0.2822 / 0.7863
  Ours              32.8552 / 0.9022 / 0.1700 / 0.6714

Deblurring (PSNR↑ / SSIM↑ / LPIPS↓ / CLIP↑):
  Reference         28.2371 / 0.7586 / 0.2821 / 0.3095
  D-Flow [4]        22.2396 / 0.6316 / 0.4104 / 0.2798
  OT-ODE [41]       31.0467 / 0.8612 / 0.1984 / 0.5908
  Flow-Priors [67]  30.4326 / 0.8350 / 0.2264 / 0.5535
  PnP-Flow [36]     32.7392 / 0.8840 / 0.1728 / 0.4597
  RFEdit [60]       22.9813 / 0.7431 / 0.2750 / 0.7914
  Ours              32.8749 / 0.9052 / 0.1486 / 0.8177

Denoising (PSNR↑ / SSIM↑ / LPIPS↓ / CLIP↑):
  Reference         22.7706 / 0.4399 / 0.4427 / 0.3178
  D-Flow [4]        19.0651 / 0.5232 / 0.4840 / 0.2451
  OT-ODE [41]       28.8276 / 0.8123 / 0.2505 / 0.4416
  Flow-Priors [67]  28.8047 / 0.7645 / 0.2770 / 0.4479
  PnP-Flow [36]     30.5899 / 0.8733 / 0.2207 / 0.5103
  RFEdit [60]       17.0219 / 0.4438 / 0.4545 / 0.8570
  Ours              32.2125 / 0.8924 / 0.1822 / 0.7679

Table 1. Quantitative comparison with flow-based restoration methods. Reference uses the degraded image $y$. FlowSteer has high perceptual quality and high pixel-level fidelity.
General schedule setting, params $(i_{\text{start}}, i_{\text{end}}, h)$ (PSNR↑ / LPIPS↓):
  Colorization      (0.5N, 0.9N, 1)   22.5841 / 0.3065
  Super-resolution  (0.5N, 0.9N, 1)   22.8262 / 0.3753
  Deblurring        (0.5N, 0.9N, 1)   23.1362 / 0.3959
  Denoising         (0.5N, 0.9N, 1)   22.4335 / 0.4677

Fine-tuned schedule setting per task, params $(i_{\text{start}}, i_{\text{step}}, i_{\text{end}}, h_1, h_2)$ (PSNR↑ / LPIPS↓):
  Colorization      (0.4N, 0.50N, 0.95N, 1, 0.3)   27.4214 / 0.2081
  Super-resolution  (0.5N, 0.70N, 0.85N, 1, 0.5)   32.8552 / 0.1700
  Deblurring        (0.7N, 0.80N, 0.90N, 1, 0.3)   32.8749 / 0.1486
  Denoising         (0.5N, 0.75N, 0.95N, 1, 0.5)   30.3822 / 0.2313

Table 2. Comparison of general and fine-tuned FlowSteer schedule settings across degradation tasks.
Degradation        With projection to $\hat{x}_{0|t}$ (PSNR↑ / LPIPS↓)   Without, using $\hat{x}_{0|t} \approx \hat{x}_t$ (PSNR↑ / LPIPS↓)
  Colorization      11.8157 / 0.6256                                 27.4214 / 0.2081
  Super-resolution  12.0936 / 0.6168                                 32.8552 / 0.1700
  Deblurring        12.1171 / 0.6228                                 32.8749 / 0.1486
  Denoising         12.1006 / 0.6467                                 30.3822 / 0.2313

Table 3. Comparison of results with and without projection to $\hat{x}_{0|t}$ in the fidelity step. Direct projection introduces additional noise that remains in image textures, leading to lower PSNR and higher LPIPS.
Figure 7. Reconstructed images with and without projecting to $\hat{x}_{0|t}$ in the fidelity update step. Projections induce residual noise that persists in the reconstructed images.
Figure 8. FlowSteer (30 steps) vs. DDNM [62], which implements Algorithm 1 on guided-diffusion [13] (100 steps).
6. Conclusion
Integrating flow-based T2I models into image restoration remains an open problem despite their efficiency over diffusion. FlowSteer addresses this challenge by enabling flow models to better respect the physical forward image formation model. By introducing a simple yet effective scheduler during the inference process, FlowSteer ensures that the image restoration task moves along a trajectory that leads to a physically consistent image. More importantly, FlowSteer is training-free, and can be applied to any existing flow-based T2I framework. Across multiple applications, including colorization, super-resolution, deblurring, and denoising, FlowSteer demonstrates both superior pixel-level fidelity and visual quality.
FlowSteer: Conditioning Flow Field for Consistent Image Restoration
Supplementary Material
7. Noise Sensitivity of the Flow Model
In Section 3, we describe how the flow model is sensitive to noisy intermediate projections. This raises the question: "Are there noise-robust algorithms from diffusion models that can be reinterpreted for the flow-model scheme?" To the best of our knowledge, there are no such algorithms that can be directly translated to the flow scheme. The main reason is that a diffusion model is trained with a noise schedule $\{\sigma_t\}$ for the time schedule $t = t_N, \ldots, t_0$. This acts as an inherent buffer of randomness at each step, and is used to design a weight $\lambda_t$ in the fidelity update steps. To illustrate this, we present the noise-robust version of Algorithm 1 and describe why it cannot be applied directly to the flow model.
7.1. Noise-robust fidelity update with diffusion
Fidelity update step. Assume a linear forward model with additive noise $y = Ax + \eta$, $\eta \sim \mathcal{N}(0, \sigma_y^2 I)$. The DDNM-style fidelity update [62] for a noisy measurement would change line 4 of Algorithm 1 as follows:
$$\hat{x}_{0|t} = A^\dagger y + (I - A^\dagger A)\,x_{0|t} = A^\dagger(Ax + \eta) + (I - A^\dagger A)\,x_{0|t} = x_{0|t} - A^\dagger\!\left(A x_{0|t} - y\right) + A^\dagger \eta, \quad \eta \sim \mathcal{N}(0, \sigma_y^2 I), \tag{5}$$
which makes explicit the extra noise term $A^\dagger \eta$.
Projection back / posterior sampling step. As in DDPM [21] or DDIM [52], a diffusion model samples/projects back to the reconstruction path at time step $t-1$. This is line 5 of Algorithm 1:
$$x_{t-1} \sim \mathcal{N}\!\left(\mu_t(x_t, \hat{x}_{0|t}),\, \sigma_t^2 I\right).$$
With the re-parametrization trick,
$$x_{t-1} = \mu_t(x_t, \hat{x}_{0|t}) + \sigma_t \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, I). \tag{6}$$
The posterior mean above is
$$\mu_t(x_t, \hat{x}_{0|t}) = \underbrace{\frac{\sqrt{\bar{\alpha}_{t-1}}\,\beta_t}{1 - \bar{\alpha}_t}}_{=:\,a_t}\,\hat{x}_{0|t} + \frac{\sqrt{\alpha_t}\,(1 - \bar{\alpha}_{t-1})}{1 - \bar{\alpha}_t}\,x_t. \tag{7}$$
Damped correction ($\lambda_t$) and variance matching ($\gamma_t$). Following the "project-and-correct" view, Wang et al. [62] propose to parametrize and dampen the update weight $\lambda_t \le 1$ for the data-fidelity (null-space) update, and to adjust the diffusion noise from $\sigma_t$ to $\gamma_t$. This changes lines 4 and 5 of Algorithm 1 to the following:
$$\hat{x}_{0|t} = x_{0|t} - \lambda_t A^\dagger\!\left(A x_{0|t} - y\right), \tag{8}$$
$$x_{t-1} = \mu_t(x_t, \hat{x}_{0|t}) + \gamma_t \epsilon_t, \quad \epsilon_t \sim \mathcal{N}(0, I). \tag{9}$$
Since $\hat{x}_{0|t}$ contains $A^\dagger \eta$, the sampling mean (7) injects an additional noise term $a_t \lambda_t A^\dagger \eta$ with covariance $a_t^2 \lambda_t^2 \sigma_y^2\, A^\dagger A^{\dagger\top}$. For the simple linear operators used in the selected tasks, this covariance is treated as isotropic, and $\gamma_t$ is set to preserve the target posterior variance. The two principles that guide the adaptive calculation of the new parameters are as follows:
i) Variance should be preserved at each step $t$:
$$\gamma_t^2 = \max\!\left(0,\; \sigma_t^2 - a_t^2 \lambda_t^2 \sigma_y^2\right). \tag{10}$$
ii) $\lambda_t$ should be as close as possible to 1:
$$\lambda_t = \begin{cases} 1, & \sigma_t \ge a_t \sigma_y, \\ \dfrac{\sigma_t}{a_t \sigma_y}, & \sigma_t < a_t \sigma_y. \end{cases} \tag{11}$$
Interpretation: $a_t$ is the coefficient on $\hat{x}_{0|t}$ in the posterior mean, $\lambda_t$ controls the strength of the fidelity (null-space) correction, $\sigma_y$ is the measurement-noise standard deviation, and $\gamma_t$ is the residual diffusion noise after accounting for the injected measurement noise.
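In code, the noise-aware parameters of Equations (10) and (11) reduce to a few lines; a sketch assuming scalar per-step schedules.

import numpy as np

def noise_aware_params(sigma_t, a_t, sigma_y):
    # Damped fidelity weight (Eq. 11) and variance-matched noise (Eq. 10).
    lam_t = 1.0 if sigma_t >= a_t * sigma_y else sigma_t / (a_t * sigma_y)
    gamma_t = np.sqrt(max(0.0, sigma_t**2 - (a_t * lam_t * sigma_y)**2))
    return lam_t, gamma_t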
7.2. Can this scheme extend to flow models?
Reframing Equation (10) as $\gamma_t^2 = \sigma_t^2 - (a_t \lambda_t \sigma_y)^2$ leads to the following condition for $\gamma_t$:
$$\gamma_t = \begin{cases} \sqrt{\sigma_t^2 - (a_t \sigma_y)^2}, & \sigma_t \ge a_t \sigma_y, \\ 0, & \sigma_t < a_t \sigma_y. \end{cases} \tag{12}$$
Since flow models do not have a noise schedule $\sigma_t$, this is equivalent to setting $\sigma_t = 0$. This implies the second case ($\sigma_t < a_t \sigma_y$) of Equations (11) and (12), leading to $\gamma_t = 0$ and
$$\lambda_t = \frac{\sigma_t}{a_t \sigma_y} = 0. \tag{13}$$
Thus, in a flow model with no noise schedule ($\sigma_t \equiv 0$), the noise-aware formulation collapses to $\gamma_t = 0$ and $\lambda_t = 0$, i.e., no fidelity correction. Moreover, the core principle of "preserving the variance at each timestep $t$" does not apply, since a flow model does not have an inherent variance
Figure 9. FlowSteer: an overview. Left: The input degraded image is inverted to noise. The source prompt $C_1$ guides the inversion flow path. Middle: The FlowSteer reconstruction path, which is conditioned by the target caption $C_2$, feature sharing parametrized by $(\gamma, \zeta)$, and the fidelity update scheduled by $\{\lambda_i\}$. Right: The FlowSteer output is compared against our baseline flow model without FlowSteer. The implicit conditioning without FlowSteer generates unnecessary hallucinations. (Zoom in for a better view.)
schedule through $\sigma_t$. This shows that the diffusion-style noise-robust update does not transfer directly to flows. Empirically, we find that fidelity updates in flow models are sensitive to measurement noise and, if used at all, must be applied sparingly with careful, task-dependent tuning.
8. More Details on the Flow Model
FlowSteer and all related baselines were implemented on an NVIDIA A100 GPU with 80 GB of VRAM. Compute resources were only required for model inference.
8.1. Feature sharing
The Flux-dev [31] model is our baseline image editing model, which we adapt for image restoration. Both the inversion path and the reconstruction path have $N = 30$ steps. The velocity prediction $v_\theta(\cdot)$ in each step is modeled through a Diffusion Transformer (DiT) [14] block. The diffusion transformer has "double-block" layers and "single-block" layers, of which the "single-block" layers are used for feature sharing. The attention maps are calculated in the last $\zeta$ inversion steps (input to noise) and correspond to the first $\zeta$ reconstruction steps (noise to image). During reconstruction, the cached attention maps are used for the first $\zeta$ steps, creating an implicit conditioning that the reconstructed image should preserve some qualities of the original image. There are similar approaches in the literature that cache attention maps, or just the Values or Key-Query pairs, to drive an edited image/video to be faithful to an input image [5,6,15,18,19,27,33,42,60,66]. In our design, we cache the complete attention map. However, after experimenting with different feature-sharing schemes and grid searches through hyper-parameters as described in Section 3, we find that such implicit conditioning is insufficient for enforcing the pixel-level fidelity required for image restoration tasks.
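Schematically, this feature sharing amounts to caching attention maps by step index during inversion and injecting them during the first $\zeta$ reconstruction steps. The sketch below illustrates only this bookkeeping; `run_single_blocks` is a hypothetical stand-in for a Flux single-block forward pass that can accept injected attention maps.

attn_cache = {}

def inversion_step(z, i, N, zeta, run_single_blocks):
    # Cache attention maps during the last `zeta` inversion steps.
    z, attn = run_single_blocks(z, i, injected_attn=None)
    if i >= N - zeta:
        attn_cache[N - 1 - i] = attn   # key by distance from the noise end
    return z

def reconstruction_step(z, i, zeta, run_single_blocks):
    # Fuse the cached maps back in during the first `zeta` reconstruction steps.
    injected = attn_cache.get(i) if i < zeta else None
    z, _ = run_single_blocks(z, i, injected_attn=injected)
    return z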
8.2. Text prompts
As described in Section 3, there are two types of prompts that implicitly condition the flow field. During inversion, the source prompt $C_1$ is used to describe the degraded image. During reconstruction, the target prompt $C_2$ is used to describe the target image features. The prompts are manually selected by the user. Table 4 shows the sample prompts used for each of the restoration tasks.
9. Details on the Degradation Models
Colorization. The forward operator $A$ maps RGB to an achromatic image by averaging channels and repeating the result across three channels. For $x \in \mathbb{R}^{3 \times H \times W}$ and pixel $p$,
$$(Ax)_c(p) = \frac{1}{3}\sum_{k=1}^{3} x_k(p), \quad c \in \{1, 2, 3\}.$$
The Moore-Penrose pseudoinverse coincides with $A$, i.e.,
$$A^\dagger y = R\!\left(\frac{1}{3}\sum_{k=1}^{3} y_k\right),$$
where $R(\cdot)$ replicates a single channel three times. Hence, for any $y \in \mathrm{range}(A)$ (three identical channels), $A^\dagger y = y$ and $A^\dagger A = A$.
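A minimal NumPy sketch of these colorization operators, assuming channel-first images of shape (3, H, W):

import numpy as np

def A_color(x):
    # RGB -> achromatic: average the channels, replicate across 3 channels.
    gray = x.mean(axis=0, keepdims=True)
    return np.repeat(gray, 3, axis=0)

def A_pinv_color(y):
    # The Moore-Penrose pseudo-inverse coincides with A for this operator.
    return A_color(y)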
Deblurring. The forward operator is a circular Gaussian blur $A$ (convolution with kernel $h$). We use the Tikhonov-regularized (Wiener-type) pseudoinverse parameterized by $\lambda_W$,
$$A^\dagger_{\lambda_W} \coloneqq (A^\top A + \lambda_W I)^{-1} A^\top,$$
where $A^\top$ denotes the adjoint of $A$. We set $\lambda_W = 0.1$ in all our experiments. The forward model can be implemented using Fourier transforms as $\mathcal{F}\{y\} = \mathcal{F}\{h\}\,\mathcal{F}\{x\}$ (or equivalently, as the convolution $y = h * x$). The inversion is implemented in Python code in the following form:
$$A^\dagger_{\lambda_W} y = \mathcal{F}^{-1}\!\left[\frac{\overline{\mathcal{F}\{h\}}}{|\mathcal{F}\{h\}|^2 + \lambda_W}\,\mathcal{F}\{y\}\right].$$
$\overline{\mathcal{F}\{h\}}$ is the element-wise complex conjugate of $\mathcal{F}\{h\}$. For a real, symmetric Gaussian kernel $h$, $\overline{\mathcal{F}\{h\}} = \mathcal{F}\{h\}$.
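A minimal NumPy sketch of the circular blur and its Wiener-type inverse, assuming the kernel `h` has been zero-padded (and centered) to the image size:

import numpy as np

def blur(x, h):
    # Circular convolution y = h * x, computed in the Fourier domain.
    return np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(x)))

def wiener_pinv(y, h, lam_w=0.1):
    # Tikhonov-regularized inverse: conj(H) / (|H|^2 + lam_w) in Fourier.
    H = np.fft.fft2(h)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + lam_w) * np.fft.fft2(y)))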
Super-resolution (×4). The forward operator $A$ downsamples an RGB image by average-pooling over non-overlapping $4 \times 4$ blocks (per channel) and decimating by a factor of 4 along height and width: $y = Ax \in \mathbb{R}^{3 \times H/4 \times W/4}$ with
$$y_c(u, v) = \frac{1}{16}\sum_{i,j=0}^{3} x_c(4u + i,\, 4v + j).$$
As a practical pseudo-inverse we use a right-inverse that restores the original spatial size by patch replication (nearest-neighbor up-sampling):
$$(A^\dagger y)_c(i, j) = y_c\!\left(\lfloor i/4 \rfloor, \lfloor j/4 \rfloor\right),$$
such that $A A^\dagger = I$ on $\mathbb{R}^{3 \times H/4 \times W/4}$.
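A sketch of the ×4 operators via block reshaping, assuming H and W are divisible by 4:

import numpy as np

def A_sr(x):
    # Average-pool non-overlapping 4x4 blocks: (3, H, W) -> (3, H/4, W/4).
    c, H, W = x.shape
    return x.reshape(c, H // 4, 4, W // 4, 4).mean(axis=(2, 4))

def A_pinv_sr(y):
    # Right-inverse by nearest-neighbor replication, so A_sr(A_pinv_sr(y)) == y.
    return y.repeat(4, axis=1).repeat(4, axis=2)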
Denoising. We model denoising with the identity forward operator $A = I$, so the measurement is simply $y = Ax + \eta = x + \eta$. Because $I$ is self-adjoint and full-rank, its Moore-Penrose pseudoinverse is itself: $A^\dagger = I$.
10. Design Details for $\{\lambda_i\}$
10.1. A Two-Step Schedule
In Section 5.3 we describe the effect of the schedule $\{\lambda_i\}$. Empirically, we observe that a single-step design, as listed in Table 2 (left), is sufficient for general reconstructions. However, the reconstruction quality can be improved with a scheduler that gradually reduces the strength of the fidelity update. This empirical observation is shown in Table 2 (right).
The following is a Python-style implementation of a two-step scheduler with the parameters $i_{\text{start}}, i_{\text{step}}, i_{\text{end}}, h_1, h_2$. Between $i_{\text{start}}$ and $i_{\text{step}}$ the value of $\lambda_i$ is $h_1$, and between $i_{\text{step}}$ and $i_{\text{end}}$ the value of $\lambda_i$ is $h_2$.
Listing 1. Step-shaped $\lambda_i$ schedule used in FlowSteer.

import numpy as np

def to_index(pos, N):
    # Accept a fractional position (e.g., 0.5 -> 0.5N) or an integer step index.
    return int(round(pos * N)) if isinstance(pos, float) else int(pos)

def make_lambda_step_schedule(
    timesteps,
    *, start, step, end,
    h_1=1.0, h_2=0.5,
    final_pad=1,
):
    N = len(timesteps) - 1
    lam = np.zeros(N, dtype=np.float32)
    if N <= 0:
        return lam

    i_0 = to_index(start, N)
    i_1 = to_index(step, N)
    # make end exclusive
    i_2 = to_index(end, N) + 1

    # order & clamp
    i_start, i_step = min(i_0, i_1), max(i_0, i_1)
    i_step, i_end = min(i_step, i_2), max(i_step, i_2)
    i_start = int(np.clip(i_start, 0, N))
    i_step = int(np.clip(i_step, 0, N))
    i_end = int(np.clip(i_end, 0, N))

    # first plateau at strength h_1, second plateau at strength h_2
    if i_start < i_step:
        lam[i_start:i_step] = h_1
    if i_step < i_end:
        lam[i_step:i_end] = h_2

    # disable the fidelity update in the last `final_pad` steps
    if 0 < final_pad < N:
        lam[-final_pad:] = 0.0

    # keep a peak of 1 if the padding zeroed the whole schedule
    if lam.max() <= 0 and N - final_pad - 1 >= 0:
        lam[N - final_pad - 1] = 1.0
    else:
        lam /= max(1.0, float(lam.max()))
    return lam
10.2. Effect of the Fidelity Update
The effect of the fidelity update is shown visually in Figures 10 and 11. If the update starts too late ($i_{\text{start}}$ too late), the reconstruction will have artifacts from the hallucinated details (Figure 10). If the update starts too early ($i_{\text{start}}$ too early), the desired level of hallucination (such as rich colors for colorization) has not yet formed. This over-smooths the reconstruction, which then loses color and texture details (Figure 11).
10.3. Final padding
A final padding parameter is added to ensure that at least a few steps at the end of the reconstruction path do not apply a fidelity update. This is another empirical design decision: we observe that having some steps without fidelity updates helps smooth out some of the noise artifacts. However, if the final padding is too large, the image can get over-smoothed. This is effectively the same as having a fidelity update too early in the reconstruction path, as shown in Figure 11.
11. Limitations and Future Directions
11.1. Empirical schedule
The FlowSteer schedule recommendations are given through empirical fine-tuning. For the four restoration tasks together, we recommend a one-step schedule in Table 2 (left). For each task separately, we recommend a two-step schedule in Table 2 (right). Even with this task-specific schedule recommendation, there are cases where the schedule does not work for every image in the dataset. This can be seen in some of the failure cases for the current FlowSteer scheme, as shown in Figure 12.
A future direction would be to extend this schedule to adapt to each individual image, so that manual tuning can be avoided. The key is to estimate the noise injected by the fidelity update and trigger the update at the step whose model noise budget best matches it, while adjusting the update strength $\{\lambda_t\}$ accordingly. This would lead to an image-adaptive scheduler that does not depend on heuristics.
11.2. Artifacts from explicit conditioning
The explicit conditioning of FlowSteer relies on pseudo-inverse operators. This has the inherent limitation of introducing artifacts, as shown in Figure 13. Apart from the noisy artifacts described above, there are instances of blocking artifacts from the upsampling operation in super-resolution, and ringing artifacts from the Wiener filter in deblurring.
12. More Visual Results
12.1. More visual results on restoration tasks
Figure 15 shows more images from the restoration tasks, highlighting that we preserve both pixel-level fidelity and rich perceptual quality. Zoom in to see the differences.
12.2. Plug-and-Play (PnP) on pre-trained flow models
We recreate the results of PnP-Flow [36] to verify whether a plug-and-play type algorithm can achieve visually appealing images after image restoration. We observed that it performs well only when the underlying model is the one provided by the authors, which is specifically trained for the class of images being tested. For example, with the pre-trained flow model (which has been trained on a cat dataset), it gives plausible results on cat images. However, when flux-dev [31] (which is trained on a much broader class of data) is used as the underlying flow model, the plug-and-play method fails to converge to a plausible reconstruction. A grid search was run to select the hyper-parameters of the PnP algorithm, and the values $(\alpha, \gamma, \eta_{\text{dn}}) = (0.3, 0.8, 0.3)$ were selected. This is demonstrated in Figure 14.
Figure 10. Visualizing the reconstruction path when the fidelity update is late. Before the fidelity update has been applied ($\hat{x}_{t_{24}}$), the Flux model has already hallucinated fine textures and colors. Immediately after activation ($\hat{x}_{t_{25}}$), the fidelity update steers the path toward the measurement. However, some residual artifacts persist in the final reconstruction.
Type | # | Source prompt $C_1$ | Target prompt $C_2$
Colorization
Pets 1 A black and white image of a cat. The cat is white with
black patches. The background is dark. The nose of the cat
is pink. The eyes of the cat are green.
A colorful image of a cat. The cat is white with black
patches. The background is dark. The nose of the cat is pink.
The eyes of the cat are green.
Wild 2 A black and white image of a cheetah. The fur of the leopard
is bright, with dark spots. The leopard has dark streaks
running down from its dark eyes.
A colored image of a cheetah. The fur of the leopard is
golden yellow, with dark spots. The leopard has dark streaks
running down from its dark eyes.
Wild 3 A black and white image of a fox. The fox has brown fur
with white streaks. The nose of the fox is black. The eyes of
the fox are dark brown.
A colorful image of a fox. The fox has brown fur with white
streaks. The nose of the fox is black. The eyes of the fox are
dark brown.
Humans 4 A black and white image of a man. A colorful image of a man. The man has black eyes.
Deblurring
Pets 1 A blurred image of a cat. The cat has white fur with black
spots.
A sharp image of a cat. The cat has white fur with black
spots. Highly detailed, taken using a Canon EOS R camera,
hyper detailed photo-realistic maximum detail.
Wild 2 A blurred image of a lion. The lion has golden colored fur
and dark brown eyes.
A sharp image of a lion. The lion has golden colored fur and
dark brown eyes. Highly detailed, taken using a Canon EOS
R camera, hyper detailed photo-realistic maximum detail.
Wild 3 A blurred image of a leopard. The leopard has brown fur and
black patches.
A sharp image of a leopard. The leopard has brown fur and
black patches. Highly detailed, taken using a Canon EOS R
camera, hyper detailed photo-realistic maximum detail.
Humans 4 A blurred image of a man. A sharp image of a man. Highly detailed, taken using a
Canon EOS R camera, hyper detailed photo-realistic
maximum detail.
Super-resolution
Pets 1 A low resolution image of a dog. The dog is brown with
white patches.
A sharp, high resolution image of a dog. The dog is brown
with white patches. Highly detailed, taken using a Canon
EOS R camera, hyper detailed photo-realistic maximum
detail.
Wild 2 A low resolution image of a lion. The lion has golden
colored fur and dark brown eyes.
A sharp, high resolution image of a lion. The lion has golden
colored fur and dark brown eyes. Highly detailed, taken
using a Canon EOS R camera, hyper detailed photo-realistic
maximum detail.
Humans 3 A low resolution image of a woman. A sharp, high resolution image of a woman. Highly detailed,
taken using a Canon EOS R camera, hyper detailed
photo-realistic maximum detail.
Denoising
Pets 1 A noisy image of a dog. The dog is brown with white
patches.
A clean, noise free image of a dog. The dog is brown with
white patches. Highly detailed, taken using a Canon EOS R
camera, hyper detailed photo-realistic maximum detail.
There are no RGB noise artifacts.
Wild 2 A noisy image of a cheetah. The fur of the cheetah is bright,
with dark spots. The cheetah has dark streaks running down
from its dark eyes.
A clean, noise free image of a cheetah. The fur of the
cheetah is bright, with dark spots. The cheetah has dark
streaks running down from its dark eyes. Highly detailed,
taken using a Canon EOS R camera, hyper detailed
photo-realistic maximum detail. There are no RGB noise
artifacts.
Human 3 A noisy image of a man. A clean, noise free image of a man. Highly detailed, taken
using a Canon EOS R camera, hyper detailed photo-realistic
maximum detail. There are no RGB noise artifacts.
Table 4. Sample prompt pairs per task. "Source" describes the input (e.g., grayscale, low-res, noisy, blurred); "Target" describes the intended restored image that is used to implicitly steer the Flux model.
Figure 11. Visualizing the reconstruction path when the fidelity update is early. Before the fidelity update has been applied ($\hat{x}_{t_9}$), the Flux model has not yet fully formed the color palette and textures. The fidelity update conditions the path to have less color ($\hat{x}_{t_{10}}$), and the large number of steps without the fidelity update over-smooths the result. The same effect is seen when a higher number of Flux steps are padded at the end of the schedule, as described in Section 10.3.
Figure 12. Noise artifacts are seen in some of the final reconstructed images with the two-step scheduler. The restoration tasks corresponding to these images are colorization (left and center) and deblurring (right). Fine-tuning the scheduler on a per-image basis may reduce these artifacts, as discussed in Section 11. Zoom in to clearly see the noise artifacts in some pixels.
Figure 13. Artifacts resulting from pseudo-inverse operators in FlowSteer, as discussed in Section 11. Blocking artifacts affect edges in super-resolution (left). The Wiener filter creates ringing effects in deblurring (center and right). Zoom in to clearly see the artifacts.
Figure 14. Plug-and-play approach on different types of flow models. Although it produces plausible results with a flow model trained on cats (left), it does poorly when implemented on a flow model trained on other general data. This is described in Section 12.
Figure 15. More qualitative comparisons of flow-based methods against FlowSteer (rows: colorization, super-resolution, denoising, deblurring; columns: Degraded, D-Flow [4], FlowPriors [67], PnP-Flow [36], RFEdit [60], Ours, Clean). The restoration models in columns 2-4 have undesirable artifacts such as excessive blur. The image editing baseline in column 5 has poor fidelity. FlowSteer achieves better pixel-level fidelity, while generating visually appealing details. Zoom in for better comparisons.
Acknowledgements: We are thankful to Mykhailo Tsysin and Yu Yuan for the helpful discussions.
References
[1] Michael S Albergo, Nicholas M Boffi, and Eric Vanden-
Eijnden. Stochastic interpolants: A unifying framework
for flows and diffusions. arXiv preprint arXiv:2303.08797,
2023. 1
[2] Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended
diffusion for text-driven editing of natural images. In Pro-
ceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition (CVPR), pages 18208–18218, 2022.
1
[3] Stephen Batifol, Andreas Blattmann, Frederic Boesel, Sak-
sham Consul, Cyril Diagne, Tim Dockhorn, Jack English,
Zion English, Patrick Esser, Sumith Kulal, et al. FLUX.1 Kontext: Flow matching for in-context image generation and editing in latent space. arXiv e-prints, pages arXiv–2506,
2025. 2
[4] Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel
Singer, and Yaron Lipman. D-flow: Differentiating
through flows for controlled generation. arXiv preprint
arXiv:2402.14017, 2024. 3,6,7,8,10
[5] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xi-
aohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mu-
tual self-attention control for consistent image synthesis and
editing. In IEEE/CVF International Conference on Com-
puter Vision (ICCV), pages 22560–22570, 2023. 2
[6] Duygu Ceylan, Chun-Hao P Huang, and Niloy J Mi-
tra. Pix2video: Video editing using image diffusion. In
IEEE/CVF International Conference on Computer Vision
(ICCV), pages 23206–23217, 2023. 2
[7] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune
Gwon, and Sungroh Yoon. Ilvr: Conditioning method for
denoising diffusion probabilistic models. 2021 IEEE/CVF
International Conference on Computer Vision (ICCV), pages
14347–14356, 2021. 2
[8] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha.
Stargan v2: Diverse image synthesis for multiple domains.
In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), pages 8188–8197,
2020. 6
[9] Hyungjin Chung, Jeongsol Kim, Michael T Mccann, Marc L
Klasky, and Jong Chul Ye. Diffusion posterior sam-
pling for general noisy inverse problems. arXiv preprint
arXiv:2209.14687, 2022. 1,2
[10] Yusuf Dalva, Kavana Venkatesh, and Pinar Yanardag. Fluxs-
pace: Disentangled semantic editing in rectified flow mod-
els. In Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition (CVPR), pages 13083–
13092, 2025. 3
[11] Yingying Deng, Xiangyu He, Changwang Mei, Peisong
Wang, and Fan Tang. Fireflow: Fast inversion of rec-
tified flow for image semantic editing. arXiv preprint
arXiv:2412.07517, 2024. 3
[12] Prafulla Dhariwal and Alex Nichol. Diffusion models beat
gans on image synthesis. In Advances in Neural Information
Processing Systems, 2021. 1,7
[13] Prafulla Dhariwal and Alex Nichol. guided-diffusion.
https://github.com/openai/guided-diffusion, 2021. GitHub repository. 7, 8
[14] Patrick Esser, Sumith Kulal, Andreas Blattmann, Rahim
Entezari, Jonas Müller, Harry Saini, Yam Levi, Dominik
Lorenz, Axel Sauer, Frederic Boesel, et al. Scaling recti-
fied flow transformers for high-resolution image synthesis.
In Forty-first international conference on machine learning,
2024. 2
[15] Michal Geyer, Omer Bar-Tal, Shai Bagon, and Tali Dekel.
Tokenflow: Consistent diffusion features for consistent video
editing. arXiv preprint arXiv:2307.10373, 2023. 2
[16] Rafael C. Gonzalez and Richard E. Woods. Digital Image
Processing. Pearson, 4 edition, 2018. 6
[17] Per Christian Hansen, James G. Nagy, and Dianne P.
O’Leary. Deblurring Images: Matrices, Spectra, and Fil-
tering. SIAM, 2006. 6
[18] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman,
Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt im-
age editing with cross attention control. The Eleventh In-
ternational Conference on Learning Representations (ICLR),
2023. 1,6,2
[19] Amir Hertz, Andrey Voynov, Shlomi Fruchter, and Daniel
Cohen-Or. Style aligned image generation via shared atten-
tion. In Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition (CVPR), pages 4775–
4785, 2024. 2
[20] Jonathan Ho and Tim Salimans. Classifier-free diffusion
guidance, 2021. 3,4
[21] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising dif-
fusion probabilistic models. In Advances in Neural Informa-
tion Processing Systems, 2020. 1,2
[22] Guanlong Jiao, Biqing Huang, Kuan-Chieh Wang, and Ren-
jie Liao. Uniedit-flow: Unleashing inversion and editing in
the era of flow models. arXiv preprint arXiv:2504.13109,
2025. 3
[23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen.
Progressive growing of gans for improved quality, stability,
and variation. arXiv preprint arXiv:1710.10196, 2017. 6
[24] Bahjat Kawar, Gregory Vaksman, and Michael Elad. Snips:
Solving noisy inverse problems stochastically. Advances in
Neural Information Processing Systems, 34:21757–21769,
2021. 2
[25] Bahjat Kawar, Michael Elad, Stefano Ermon, and Jiaming
Song. Denoising diffusion restoration models. Advances
in Neural Information Processing Systems, 35:23593–23606,
2022. 1,2
[26] Jeongsol Kim, Yeobin Hong, Jonghyun Park, and Jong Chul
Ye. Flowalign: Trajectory-regularized, inversion-free flow-
based image editing. arXiv preprint arXiv:2505.23145,
2025. 2
[27] Jimyeong Kim, Jungwon Park, Yeji Song, Nojun Kwak, and
Wonjong Rhee. Reflex: Text-guided editing of real images
in rectified flow via mid-step feature extraction and attention
adaptation. In Proceedings of the IEEE/CVF International
Conference on Computer Vision (ICCV), 2025. 2
[28] Diederik P. Kingma and Max Welling. Auto-encoding varia-
tional bayes. In International Conference on Learning Rep-
resentations (ICLR), 2014. 4
[29] Dehong Kong, Fan Li, Zhixin Wang, Jiaqi Xu, Renjing
Pei, Wenbo Li, and WenQi Ren. Dual prompting image
restoration with diffusion transformers. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pages 12809–12819, 2025. 2
[30] Vladimir Kulikov, Matan Kleiner, Inbar Huberman-
Spiegelglas, and Tomer Michaeli. Flowedit: Inversion-
free text-based editing using pre-trained flow models. In
Proceedings of the IEEE/CVF International Conference on
Computer Vision (ICCV), pages 19721–19730, 2025. 2
[31] Black Forest Labs. FLUX. https://github.com/black-forest-labs/flux, 2024. 2, 3, 4, 5
[32] Yaron Lipman, Ricky TQ Chen, Heli Ben-Hamu, Maximil-
ian Nickel, and Matt Le. Flow matching for generative mod-
eling. arXiv preprint arXiv:2210.02747, 2022. 2
[33] Shaoteng Liu, Yuechen Zhang, Wenbo Li, Zhe Lin, and Jiaya
Jia. Video-p2p: Video editing with cross-attention control.
In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition (CVPR), pages 8599–8608,
2024. 2
[34] Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow
straight and fast: Learning to generate and transfer data with
rectified flow. arXiv preprint arXiv:2209.03003, 2022. 2
[35] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang.
Deep learning face attributes in the wild. In Proceedings
of the IEEE International Conference on Computer Vision
(ICCV), pages 3730–3738, 2015. 6
[36] Ségolène Martin, Anne Gagneux, Paul Hagemann, and
Gabriele Steidl. Pnp-flow: Plug-and-play image restoration
with flow matching. arXiv preprint arXiv:2410.02423, 2024.
2,3,6,7,8,4,10
[37] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jia-
jun Wu, Jun-Yan Zhu, and Stefano Ermon. SDEdit: Guided
image synthesis and editing with stochastic differential equa-
tions. In International Conference on Learning Representa-
tions (ICLR), 2022. 1,2
[38] Chong Mou, Xintao Wang, Liangbin Xie, Yanze Wu, Jian
Zhang, Zhongang Qi, and Ying Shan. T2i-adapter: Learning
adapters to dig out more controllable ability for text-to-image
diffusion models. In Proceedings of the AAAI conference on
artificial intelligence, pages 4296–4304, 2024. 1
[39] Alexander Quinn Nichol and Prafulla Dhariwal. Improved
denoising diffusion probabilistic models. In International
conference on machine learning, pages 8162–8171. PMLR,
2021. 2
[40] Yong-Hyun Park, Mingi Kwon, Jaewoong Choi, Junghyo
Jo, and Youngjung Uh. Understanding the latent space of
diffusion models through the lens of riemannian geometry.
Advances in Neural Information Processing Systems, 36:
24129–24142, 2023. 6
[41] Ashwini Pokle, Matthew J. Muckley, Ricky T. Q. Chen, and
Brian Karrer. Training-free linear image inverses via flows.
Transactions on Machine Learning Research (TMLR), 2024.
Often referred to as OT-ODE. 6,8
[42] Chenyang Qi, Xiaodong Cun, Yong Zhang, Chenyang Lei,
Xintao Wang, Ying Shan, and Qifeng Chen. Fatezero: Fus-
ing attentions for zero-shot text-based video editing. In
IEEE/CVF International Conference on Computer Vision
(ICCV), pages 15932–15942, 2023. 2
[43] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry,
Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning
transferable visual models from natural language supervi-
sion. In International conference on machine learning, pages
8748–8763. PMLR, 2021. 6
[44] Robin Rombach, Andreas Blattmann, Dominik Lorenz,
Patrick Esser, and Björn Ommer. High-resolution image
synthesis with latent diffusion models. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pages 10684–10695, 2022. 1,4
[45] Litu Rout, Yujia Chen, Nataniel Ruiz, Constantine Carama-
nis, Sanjay Shakkottai, and Wen-Sheng Chu. Semantic im-
age inversion and editing using rectified stochastic differen-
tial equations. In The Thirteenth International Conference
on Learning Representations (ICLR), 2025. 1,3
[46] Chitwan Saharia, Jonathan Ho, William Chan, Tim Sali-
mans, David J Fleet, and Mohammad Norouzi. Image super-
resolution via iterative refinement. IEEE transactions on
pattern analysis and machine intelligence, 45(4):4713–4726,
2022. 1
[47] Ketan Suhaas Saichandran, Xavier Thomas, Prakhar
Kaushik, and Deepti Ghadiyaram. Progressive prompt de-
tailing for improved alignment in text-to-image generative
models. In Proceedings of the IEEE/CVF Conference on
Computer Vision and Pattern Recognition (CVPR) Work-
shops, 2025. 6
[48] Axel Sauer, Frederic Boesel, Tim Dockhorn, Andreas
Blattmann, Patrick Esser, and Robin Rombach. Fast high-
resolution image synthesis with latent adversarial diffusion
distillation. In SIGGRAPH Asia 2024 Conference Papers,
pages 1–11, 2024. 2
[49] Johannes Schusterbauer, Ming Gui, Frank Fundel, and Björn Ommer. Diff2flow: Training flow matching models via dif-
fusion model alignment. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition
(CVPR), pages 28347–28357, 2025. 2
[50] Andy Shih, Suneel Belkhale, Stefano Ermon, Dorsa Sadigh,
and Nima Anari. Parallel sampling of diffusion models. Ad-
vances in Neural Information Processing Systems, 36:4263–
4276, 2023. 1
[51] Bowen Song, Soo Min Kwon, Zecheng Zhang, Xinyu Hu,
Qing Qu, and Liyue Shen. Solving inverse problems with
latent diffusion models via hard data consistency. The
Eleventh International Conference on Learning Representa-
tions (ICLR), 2024. 1
[52] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denois-
ing diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. 1, 2
[53] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Ab-
hishek Kumar, Stefano Ermon, and Ben Poole. Score-based
generative modeling through stochastic differential equa-
tions. arXiv preprint arXiv:2011.13456, 2020. 2
[54] Stability.ai. Stable Diffusion 3. https://stability.ai/news/stable-diffusion-3-research-paper, 2024. 2
[55] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali
Dekel. Plug-and-play diffusion features for text-driven
image-to-image translation. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition
(CVPR), pages 1921–1930, 2023. 1
[56] Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali
Dekel. Plug-and-play diffusion features for text-driven
image-to-image translation. In Proceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition
(CVPR), pages 1921–1930, 2023. 1
[57] Singanallur V Venkatakrishnan, Charles A Bouman, and
Brendt Wohlberg. Plug-and-play priors for model based re-
construction. In 2013 IEEE global conference on signal and
information processing, pages 945–948. IEEE, 2013. 2
[58] Bram Wallace, Akash Gokul, and Nikhil Naik. Edict: Exact
diffusion inversion via coupled transformations. In Proceed-
ings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition (CVPR), pages 22532–22541, 2023. 2
[59] Chong Wang, Lanqing Guo, Zixuan Fu, Siyuan Yang, Hao
Cheng, Alex C Kot, and Bihan Wen. Reconciling stochas-
tic and deterministic strategies for zero-shot image restora-
tion using diffusion model in dual. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), pages 23207–23216, 2025. 2
[60] Jiangshan Wang, Junfu Pu, Zhongang Qi, Jiayi Guo, Yue Ma,
Nisha Huang, Yuxin Chen, Xiu Li, and Ying Shan. Tam-
ing rectified flow for inversion and editing. arXiv preprint
arXiv:2411.04746, 2024. 2,3,6,7,8,10
[61] Siyuan Wang, Yuyao Yan, Xi Yang, Rui Zhang, Qiufeng
Wang, Guangliang Cheng, and Kaizhu Huang. Point2pix-
zero: Point-driven refined diffusion for multi-object image
editing. Pattern Recognition, page 112041, 2025. 1
[62] Yinhuai Wang, Jiwen Yu, and Jian Zhang. Zero-shot image
restoration using denoising diffusion null-space model. The
Eleventh International Conference on Learning Representa-
tions (ICLR), 2023. 2,6,7,8,1
[63] Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin
Chen, Xiaoyan Sun, Dong Chen, and Fang Wen. Paint by
example: Exemplar-based image editing with diffusion mod-
els. In Proceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition (CVPR), pages 18381–
18391, 2023. 1
[64] Xiaofeng Yang, Cheng Chen, Xulei Yang, Fayao Liu, and
Guosheng Lin. Text-to-image rectified flow as plug-and-play
priors. arXiv preprint arXiv:2406.03293, 2024. 1,3
[65] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shecht-
man, and Oliver Wang. The unreasonable effectiveness of
deep features as a perceptual metric. In Proceedings of
the IEEE/CVF Conference on Computer Vision and Pattern
Recognition (CVPR), 2018. 6
[66] Yuechen Zhang, Jinbo Xing, Eric Lo, and Jiaya Jia. Real-
world image variation by aligning diffusion inversion chain.
Advances in Neural Information Processing Systems, 36:
30641–30661, 2023. 2
[67] Yasi Zhang, Peiyu Yu, Yaxuan Zhu, Yingshan Chang, Feng
Gao, Ying Nian Wu, and Oscar Leong. Flow priors for lin-
ear inverse problems via iterative corrupted trajectory match-
ing. Advances in Neural Information Processing Systems,
37:57389–57417, 2024. 3,6,7,8,10
[68] Yuanzhi Zhu, Kai Zhang, Jingyun Liang, Jiezhang Cao, Bi-
han Wen, Radu Timofte, and Luc Van Gool. Denoising dif-
fusion models for plug-and-play image restoration. In Pro-
ceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition (CVPR), pages 1219–1229, 2023. 2