I have some 2D diffraction data stored as a NumPy array. In reciprocal space, however, the data is not uniform, so I have two additional arrays holding the x and y reciprocal-space coordinates. Each subsequent image is rotated.
When the data is regular I can just use skimage.transform.rotate, but since this data is not uniform and the positions need to be maintained, I've resorted to the following:
Define the transformation array (S)
import numpy as np
import fabio
import matplotlib.pyplot as plt
# get the data and some experimental parameters
obj = fabio.open(filename)
Data = obj.data  # 2D diffraction data, but can be any 32-bit image
DataSize = obj.shape
DetectorDistance = 122
Pixel = 0.1
BeamPosXY = [float(i) for i in obj.header["PXD_SPATIAL_BEAM_POSITION"].split(' ')]  # pixel coordinates of the beam position on the image (centre of the diffraction pattern)
#make arrays to represent the detector image in twotheta-gamma space and reciprocal S space
xx1 = np.linspace(0, Pixel * DataSize[1], DataSize[1])
yy1 = np.linspace(0, Pixel * DataSize[0], DataSize[0])
xx, yy = np.meshgrid(xx1, yy1)
NewXX = xx - (BeamPosXY[0] * Pixel)
NewYY = yy - (BeamPosXY[1] * Pixel)
height = DetectorDistance * np.sin(NewXX / DetectorDistance)
TwoTheta = np.arcsin(np.sqrt(NewYY**2 + height**2) / np.sqrt(NewYY**2 + DetectorDistance**2))  # radians
Gamma = np.arctan(NewYY / height)  # radians
S = (2 * np.sin(TwoTheta / 2)) / Wavelength  # Wavelength is the X-ray wavelength from the experiment
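As a sanity check, the mapping above can be run on a small synthetic grid (the detector distance, pixel size, wavelength and beam centre below are made-up stand-ins for the experimental values): S should be approximately zero at the beam centre and grow towards the detector edges.

```python
import numpy as np

# All parameters here are assumptions standing in for the real experiment.
DetectorDistance = 122.0   # mm (assumed)
Pixel = 0.1                # mm per pixel (assumed)
Wavelength = 1.54          # angstrom (assumed, roughly Cu K-alpha)
shape = (64, 64)           # small synthetic detector
beam = (32.0, 32.0)        # assumed beam centre in pixels

# same construction as in the question, on the synthetic grid
xx1 = np.linspace(0, Pixel * shape[1], shape[1])
yy1 = np.linspace(0, Pixel * shape[0], shape[0])
xx, yy = np.meshgrid(xx1, yy1)
NewXX = xx - beam[0] * Pixel
NewYY = yy - beam[1] * Pixel

height = DetectorDistance * np.sin(NewXX / DetectorDistance)
TwoTheta = np.arcsin(np.sqrt(NewYY**2 + height**2)
                     / np.sqrt(NewYY**2 + DetectorDistance**2))  # radians
Gamma = np.arctan2(NewYY, height)  # radians; arctan2 avoids divide-by-zero at the centre
S = 2 * np.sin(TwoTheta / 2) / Wavelength
```

Note this sketch uses np.arctan2 for Gamma so the beam-centre pixel (where height is ~0) does not raise a division warning; the quadrant handling is otherwise equivalent.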
Define a large array to fit the transformed data
xS = np.linspace(-2 * DataSize[0] * Spix, 2 * DataSize[0] * Spix, 4 * DataSize[0] + 1)  # Spix is the reciprocal-space pixel size
yS = np.linspace(-2 * DataSize[1] * Spix, 2 * DataSize[1] * Spix, 4 * DataSize[1] + 1)
xxS, yyS = np.meshgrid(xS, yS)
The rotation is a rotation of Chi degrees applied to the Gamma array in S space, as follows:
Sx = S * np.cos(Gamma + np.radians(Chi))
Sy = S * np.sin(Gamma + np.radians(Chi))
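Incidentally, the polar-form rotation above is algebraically identical to applying a 2x2 rotation matrix to the unrotated Cartesian coordinates, which means Sx0/Sy0 only need computing once per detector geometry and each frame's rotation is two multiplies and an add per pixel. The random arrays below are stand-ins for the real S and Gamma:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.random((4, 4))                     # stand-in for the real S array
Gamma = rng.uniform(-np.pi, np.pi, (4, 4)) # stand-in for the real Gamma array
Chi = 37.0                                 # degrees, assumed

# polar form, as in the question
Sx = S * np.cos(Gamma + np.radians(Chi))
Sy = S * np.sin(Gamma + np.radians(Chi))

# matrix form applied to the unrotated Cartesian coordinates
Sx0, Sy0 = S * np.cos(Gamma), S * np.sin(Gamma)
c, s = np.cos(np.radians(Chi)), np.sin(np.radians(Chi))
Sx_m = c * Sx0 - s * Sy0   # expands cos(Gamma + Chi) via the angle-sum identity
Sy_m = s * Sx0 + c * Sy0   # expands sin(Gamma + Chi) via the angle-sum identity
```

The two forms agree to floating-point precision, since cos(G + C) = cos G cos C - sin G sin C and sin(G + C) = sin G cos C + cos G sin C.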
Then, for each data frame, I loop through each data pixel, get the (Sx, Sy) coordinates of that pixel, find the corresponding position in my output array, and put the value in that place, as follows:
for a in range(DataSize[0]):
    for b in range(DataSize[1]):
        ImageSx = Sx[a, b]  # x coordinate of the pixel
        ImageSy = Sy[a, b]  # y coordinate of the pixel
        OutSx = np.where(np.abs(xxS - ImageSx) < tol)  # indices of the pixel coordinates in the output array; tol is half a pixel coordinate
        OutSy = np.where(np.abs(yyS[OutSx] - ImageSy) < tol)[0][0]  # y index of the pixel coordinates in the output array
        OutSx = OutSx[1][0]  # keep just the one x index
        DataOut[OutSx, OutSy] = np.max((DataOut[OutSx, OutSy], Data[a, b]))  # put the data value where it belongs, keeping the max on collisions
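For reference, because the output axes xS/yS are uniform with spacing Spix, the nearest output index can be computed arithmetically instead of searched for, and np.maximum.at then scatters every pixel in one vectorized call while keeping the same max-on-collision behaviour as the loop. This is only a sketch with made-up sizes, spacing and data, not the code above:

```python
import numpy as np

rng = np.random.default_rng(1)
Spix = 0.01                               # assumed reciprocal-space pixel size
Data = rng.random((100, 100))             # stand-in for one detector frame
Sx = rng.uniform(-0.4, 0.4, Data.shape)   # stand-in x coordinates per pixel
Sy = rng.uniform(-0.4, 0.4, Data.shape)   # stand-in y coordinates per pixel

xS = np.arange(-0.5, 0.5 + Spix / 2, Spix)  # uniform output x axis
yS = np.arange(-0.5, 0.5 + Spix / 2, Spix)  # uniform output y axis
DataOut = np.zeros((len(yS), len(xS)))

# nearest grid index per pixel, computed directly from the uniform spacing
ix = np.round((Sx - xS[0]) / Spix).astype(int)  # column index per pixel
iy = np.round((Sy - yS[0]) / Spix).astype(int)  # row index per pixel

# unbuffered scatter: repeated indices keep the maximum, like the loop does
np.maximum.at(DataOut, (iy.ravel(), ix.ravel()), Data.ravel())
```

np.maximum.at is unbuffered, so pixels that land on the same output cell correctly keep the maximum rather than the last write.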
As an example, this is a single image with a -90 degree transformation, plotted as a scatter plot; you can see the space is not a regular shape but curved.
Single image mapped to (OutSx, OutSy)
This works, but it takes ~10 ms per image pixel:
%timeit bigSx=np.where(np.abs(xxS-ImageSx)<tol); np.where(np.abs(yyS[bigSx]-ImageSy)<tol)
9.71 ms ± 251 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
With 10 images of ~680k pixels each, this takes hours.
I've tried boolean masking, but it's even slower:
%timeit BoolY=np.abs(yyS-ImageSy)<tol; BoolX=np.abs(xxS-ImageSx)<tol; np.logical_and(BoolX,BoolY).nonzero()
12.5 ms ± 94.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
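One reason both versions are slow is that each lookup scans the full 2D xxS/yyS grids; since the 1D axes xS and yS are sorted, np.searchsorted can find the same index with a binary search instead of a full scan. The axis values and pixel coordinate below are stand-ins:

```python
import numpy as np

xS = np.linspace(-1, 1, 2001)   # stand-in for the sorted output x axis
Spix = xS[1] - xS[0]            # grid spacing
ImageSx = 0.3217                # stand-in pixel coordinate

# The first grid point at or above (ImageSx - Spix/2) is the nearest grid
# point; searchsorted finds it in O(log n) rather than scanning the grid.
OutSx = np.searchsorted(xS, ImageSx - Spix / 2)
```

The same one-line lookup applies to the y axis, so the 2D np.where over xxS/yyS is never needed.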
I feel like there should be a better and faster way of doing this by applying the position arrays Sx and Sy to the data. I've tried nested loops of np.where and boolean masks; they work, but I was hoping for a faster solution.