I want to understand how lucid can keep iterating when the image it feeds in changes size dynamically through transform_f on every iteration. For example, if I run the code below,
import lucid.optvis.transform as lucid_t
from lucid.optvis.render import make_transform_f
import numpy as np
from tensorflow.keras import backend as K  # Keras backend, used for K.variable / K.eval below

# standard_transforms copied from lucid/optvis/transform.py
standard_transforms = [
lucid_t.pad(12, mode='constant', constant_value=.5),
lucid_t.jitter(8),
lucid_t.random_scale([1 + (i-5)/50. for i in range(11)]),
lucid_t.random_rotate(list(range(-10, 11)) + 5*[0]),
lucid_t.jitter(4),
]
prior = np.random.normal(size=(1,224,224,3))
input_image = K.variable(prior)
transform_f = make_transform_f(standard_transforms)
transformed = transform_f(input_image)
post = K.eval(transformed)
print(prior.shape)
print(post.shape)
I get
(1, 224, 224, 3)
(1, 221, 221, 3)
*I know the exact output size changes every time because of the random_scale transform.
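For example, continuing the snippet above, evaluating the same transformed tensor repeatedly re-samples the random transforms, so the spatial size typically differs from run to run:

# Continuing the snippet above: each K.eval re-runs the graph and
# re-samples the random scale/rotation, so the output shape varies.
for _ in range(3):
    print(K.eval(transformed).shape)  # e.g. (1, 221, 221, 3), (1, 230, 230, 3), ...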
Feature maximization is gradient ascent, so at iteration t, with the transformation above, I would expect something like
image(t+1) = transform_f(image(t)) + gradient
where the spatial sizes of those tensors are
image(t): (224, 224)
image(t+1): (221, 221)
(see the sketch below for the loop I have in mind).
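To make the mismatch concrete, this is the kind of update loop I am imagining (a purely hypothetical sketch, not lucid's actual code; compute_objective_gradient, num_steps and learning_rate are made-up placeholders):

# Hypothetical sketch of the loop I imagine -- not lucid's actual code.
image = np.random.normal(size=(1, 224, 224, 3))
for t in range(num_steps):
    transformed = transform_f(image)                # spatial size can change, e.g. (1, 221, 221, 3)
    grad = compute_objective_gradient(transformed)  # gradient has the transformed shape
    image = transformed + learning_rate * grad      # no longer (1, 224, 224, 3)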
However, the final image that lucid produces after all iterations is always of size (224, 224) when I start with (224, 224),
so is lucid doing one of the following (sketched below), or something else entirely?
After transform_f, resizing the image back to (224, 224) and computing the gradients at that size in every step,
or
computing the gradients at size (221, 221), updating the image by adding the gradients, and resizing back to (224, 224) before the next calculation step.
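In pseudo-Python, with a made-up resize_to helper (and the same placeholder gradient/step names as above), the two options would look like this:

# Option 1 (hypothetical sketch): resize back to (224, 224) right after transform_f,
# so the gradient is computed at the original resolution in every step.
transformed = resize_to(transform_f(image), (224, 224))
grad = compute_objective_gradient(transformed)      # gradient at (224, 224)
image = transformed + learning_rate * grad

# Option 2 (hypothetical sketch): compute and apply the gradient at the transformed size,
# then resize back to (224, 224) before the next step.
transformed = transform_f(image)                    # e.g. (1, 221, 221, 3)
grad = compute_objective_gradient(transformed)      # gradient at (221, 221)
image = resize_to(transformed + learning_rate * grad, (224, 224))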
It would be helpful if you could point me to the code where lucid handles this internally.