=Abstract=
\qquad{}The thesis reports the results of an improved track reconstruction program that is used to determine the momentum and interaction vertex of particles scattering in experiments performed using the CLAS12 Detector at Jefferson Lab. Particles are reconstructed from the CLAS12 Drift Chamber and scintillator detector sub-systems. The improved algorithm performs a three-dimensional grid search in momentum space in an attempt to minimize the distance of closest approach between the particle's path and the transverse location of the incident electrons that scatter from nucleons in a cryogenic target. Analysis has shown that the new algorithm is capable of improving the reconstruction in the X and Y directions (orthogonal to the incident electrons) by a factor of no less than ten, while at least maintaining the quality of the reconstruction in the Z direction.

\qquad{}An improvement in the reconstruction of the Z direction allows for statistical certainty in stating in which part of the physics target an interaction occurred. This is of particular importance for the polarized SIDIS dual target, a physics target with two distinct sub-targets that can be filled with ammonia and polarized in different directions. Being able to state in which of the two sub-targets an event occurred is paramount to being able to distinguish the difference in interactions between the two. Analysis of the proposed algorithm shows that one can resolve the two targets if they are separated by at least 4 cm.
=Introduction=
The CLAS12 is a charged and neutral particle detector constructed in Jefferson Lab's Hall B. The detector is used to reconstruct nuclear physics interactions that result when GeV-energy electrons interact with select targets. CLAS12's precursor, the original CLAS (CEBAF Large Acceptance Spectrometer), was built to facilitate measurements of excited states of nucleons and to characterize nucleon-nucleon correlations. Contemporaneous with the 12 GeV upgrade to Jefferson Lab's accelerator, CLAS was removed and CLAS12 was built in its place.\autocite{hallBMainPage}

CLAS12 is designed to not only continue the mission of its predecessor, but also to measure generalized parton distributions of nucleons and search for exotic forms of matter. It is currently slated to perform experiments for more than ten years into the future and will likely continue to contribute even after that.\autocite{hallBMainPage} Figure \ref{fig:clas12-design} shows a model of the detector with labels detailing some of the sub-detectors and other components. It is important to note here that CLAS12 is not a singular detector, but instead a suite of detectors which can be correlated together in order to more precisely reconstruct the interactions that occur within the physics target.

Alongside the construction of CLAS12, several software packages have been created to handle simulating the detector response, reconstructing events from that response, and visualizing the reconstructed interactions. For simulation, the CLAS Collaboration has developed a package that provides all of the necessary information to GEMC\autocite{gemc} (GEant4 Monte-Carlo), a general program designed to make interfacing with Geant4\autocite{geant4} easier. The program used for viewing events is called CED (CLAS Event Display) and is a continuation of a program originally designed for use with the CLAS. Most important for this document, as it is the topic of discussion, is the reconstruction software. Known as clas12-offline-software\autocite{coatjava}, it is a collection of various (mostly) Java programs used to process the data collected from CLAS12 and reconstruct the observed particle mass, momentum, and location within the detector.

This thesis is two-fold in purpose. First, it is intended to provide an understanding of the current methods used to reconstruct particle trajectories from the drift chamber sub-detectors of CLAS12. This includes both a theoretical understanding of the underlying algorithms and physics as well as the actual implementation of those ideas. The second purpose is to present an additional method of reconstruction which takes the already existing drift chamber reconstruction output and further refines it so that interactions which arise from a rastered beam may be reconstructed more accurately.

It should be noted that, in the context of this thesis, the primary motivation behind the development of this new reconstruction algorithm is to allow for the reconstruction of events from a physics target that contains two polarized target regions separated by some distance. More details will be given about the target in the Simulation and Reconstruction chapter. Semi-Inclusive Deep Inelastic Scattering (SIDIS) events are of particular interest. SIDIS events only consider the scattered incident particle, an electron in this case, and the highest energy hadron produced by the interaction, if any. The core idea is that when the scattering particle interacts with a parton in the struck particle, there will be a primary product hadron accompanied by a hadron shower. The kinematics of this primary product and the scattering particle are believed to encode most of the information about the struck parton.
[Figure \ref{fig:clas12-design}: a model of the CLAS12 detector with labels for the major sub-detectors and components.]
=Theory=
This chapter is meant to provide sufficient knowledge and background to understand the general workings of the CLAS12 drift chambers, the most critical elements of the reconstruction process, and the rasterization of the scattering electrons. With regards to the drift chambers, particular attention is paid to the geometry and physical construction, as they directly affect the acceptance of the detector. The most important elements of the reconstruction discussed here are the concepts of Runge-Kutta approximations and Kalman Fitters; both are used heavily in the reconstruction process. Finally, the discussion of rastered beams provides background on why this project exists in the first place.
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
==Drift Chambers==
===Typical Construction and Principle of Operation===
Drift chambers are charged particle detectors designed to measure the trajectory of a particle as it passes through the detector. This is the primary measurement used to calculate quantities of scientific interest by means of inverse kinematics.

Typically, drift chambers are constructed out of a chamber filled with an ionizable gas and planes of conducting wire. The wire planes are arranged to be ideally perpendicular to the particle's direction of motion and come in two alternating types: cathode planes and anode (sense) planes. The planes are designed to create a nearly-uniform static electric field which will drive electrons towards the sense planes. As described in the next paragraph, the sense wires collect ionized electrons, producing an electronic signal. That electronic signal propagates in both directions along the sense wire, away from the location of the ionization. The signal's propagation is measured by a pair of coupled timers at either end of the wire. It is common to have multiple sets of layers which are oriented differently, though still perpendicularly to the direction of motion, to increase the accuracy of the reconstructed trajectories.\autocite{sauli77}

The general principle of producing an electronic signal in a drift chamber is as follows. A charged particle moves through the drift chamber and ionizes atoms along its path. The electrons, which have been separated from these atoms, are then accelerated towards the anode (sense) wires. If the anode wire is very thin, the electric field near it becomes very strong, causing the electrons to pick up a great deal of speed and trigger a Townsend avalanche: a cascading series of ionizations which helps to amplify the signal.\autocite{leo87} This signal is a voltage pulse traveling towards either end of the wire from the point where the electrons arrive. Using the coupled timers at the ends, it is then possible to use the difference in signal propagation times to calculate the position of the hit along the wire.
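As a simple illustration of this timing calculation (a sketch assuming a known, constant signal propagation speed $v$ along a wire of length $L$): a hit at distance $x$ from the end with the first timer produces arrival times $t_{1}=x/v$ and $t_{2}=(L-x)/v$, which can be inverted to give
\begin{equation}
x=\frac{L}{2}-\frac{v}{2}\left(t_{2}-t_{1}\right).
\end{equation}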
This position along the axis of the wire is only one-dimensional information about a particle traveling through 3D space. It is, however, possible to couple the timing information from multiple nearby sense wires and a measurement of the time the liberated electrons take to reach the sense wire to calculate the distance of closest approach (DOCA) to each of them. This then gives a position along the axis of each wire as well as a radius perpendicular to that axis at that point. If all of this information is known perfectly, then the vector component of the path which lies in the plane of a circle defined by the aforementioned radius will be tangent to that circle. Combining that information with the change in hit position along the axis of each wire allows for the ultimate measurement of the particle's path through the detector.
===Addition of a Magnetic Field===
The inclusion of a magnetic field into a drift chamber allows for the reconstruction of not just the path of the particle, but also the magnitude of its momentum. A uniform magnetic field perpendicular to the particle's direction of motion, for example, would cause the path to bend into some section of a circle, thus changing the expected hit position along the wires of the sense planes. Using these hits, it is then possible to reconstruct the radius of curvature of the path. With the assumption that the particle carries the usual elementary charge, it is then possible to calculate the particle's momentum by equating the central force and magnetic force as shown in equation \ref{eq:momentumFromForces}.
\begin{equation}
\frac{mv^{2}}{r}=qvB\implies{}p=mv=qBr
\label{eq:momentumFromForces}
\end{equation}
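In practical units, this relation is frequently quoted as $p[\mathrm{GeV}/c]\approx{}0.3\,B[\mathrm{T}]\,r[\mathrm{m}]$; for example, a track curving with a one meter radius in a one Tesla field carries roughly 0.3 GeV/c of momentum.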
==CLAS12==
CLAS12's drift chambers are the primary source of path information for interaction reconstruction within the entire detector suite and are the focus of this document. As such, it is important to make their construction as clear as possible.
===CLAS12's Drift Chamber Construction===
The CLAS12 Detector's drift chamber is broken up into 18 separate sub-chambers as shown in figure \ref{fig:regionsAndSectors}. There are six chambers in each of three regions which are approximately 2.1, 3.3, and 4.5 meters away from the physics target position. Within each region, the six triangular chambers are arranged to be in symmetric sectors around the beam axis with angular coverage between 5\textdegree{} and 40\textdegree{} as measured at the target location with respect to the beam axis. The design of the detector allows for a momentum resolution of less than one percent.\autocite{dcSpecSheet}
\begin{figure}
\includegraphics[width=\textwidth]{./Img/DAF-19-03-24-DC-RegionsAndSectors.png}
\caption{The three regions and six sectors per region in the drift chambers.\autocite{clas12DriftChambers}}
\label{fig:regionsAndSectors}
\end{figure}
Within each region and sector are two superlayers of wire planes which are arranged such that the axes of the wires lie in parallel planes that are rotated by 12\textdegree{} with respect to each other. Within each superlayer are 6 layers of sense wires. Each layer has 112 sense wires which have a hexagonal arrangement of cathode wires around them. These hexagonal arrangements are referred to as cells. They grow in size from region 1 to region 3 and provide a spatial resolution of approximately 300\textmu{}m.\autocite{dcSpecSheet}
===Magnetic Fields===
The CLAS12 detector has two magnetic fields, one solenoidal field at the target and one toroidal field centered around the second drift chamber region. The solenoidal field points primarily in the direction of the beamline with a central value of 5 Tesla, and is designed to prevent M{\o}ller electrons, a primary background, from entering the detector.\autocite{solenoidSpecSheet} The toroidal field is actually the result of six smaller coils, one in each sector, which all contribute to create a field that is primarily in the phi (azimuthal about the beam line) direction.\autocite{toroidSpecSheet} This field is designed to bend particles either into or away from the beam line through the drift chambers, which allows for the reconstruction of the particle's momentum. See figure \ref{fig:magneticFields}.
\begin{figure}
\includegraphics[width=\textwidth]{./Img/DAF-19-03-24-DC-MagneticFieldMagnitude.png}
\caption{The strength of both the solenoidal (left) and toroidal (center) magnetic fields.}
\label{fig:magneticFields}
\end{figure}

==Track Reconstruction Elements==
This section describes the theory behind and implementation of several of the more opaque components of the reconstruction algorithms.
===Coordinate Systems===
The collaboration has defined two coordinate systems in addition to the lab system to reduce the complexity of the reconstruction process within the drift chambers. The lab coordinate system is defined by a right-handed system such that the positive y-direction is upwards, against the pull of gravity, and the positive z-direction is down the beam line. This results in the positive x-direction bisecting what is known as sector one as shown in figure \ref{fig:labCoordinateSystem}.
\begin{figure}
\includegraphics[width=\textwidth]{./Img/DAF-19-03-24-DC-LabCoordinateSystem.png}
\caption{The lab coordinate system and sectors as seen when looking down the beam line.}
\label{fig:labCoordinateSystem}
\end{figure}
The sector coordinate system is defined by rotating the sector of interest into the position of sector one. This rotation will naturally be some multiple of 60\textdegree{} and will only take into account $\pm{}30$\textdegree{} about the 0\textdegree{} angle. The tilted sector coordinate system takes a sector coordinate system and rotates it about the y-axis by 25\textdegree{} such that the z-direction is now perpendicular to the plane of the drift chambers. Transformations between these different coordinate systems are typically handled by rotation matrices. The matrices \ref{mat:rotx}, \ref{mat:roty}, and \ref{mat:rotz} operate on vectors to rotate about the x, y, and z axes respectively.\autocite{arfken13}
\begin{equation}
R_{x}(\theta)=
\begin{pmatrix}
1 & 0 & 0 \\
0 & \cos\theta & -\sin\theta \\
0 & \sin\theta & \cos\theta
\end{pmatrix}
\label{mat:rotx}
\end{equation}
\begin{equation}
R_{y}(\theta)=
\begin{pmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{pmatrix}
\label{mat:roty}
\end{equation}
\begin{equation}
R_{z}(\theta)=
\begin{pmatrix}
\cos\theta & -\sin\theta & 0 \\
\sin\theta & \cos\theta & 0 \\
0 & 0 & 1
\end{pmatrix}
\label{mat:rotz}
\end{equation}
It follows then that to transform from the tilted sector coordinate system to the sector coordinate system, one would use matrix \ref{mat:roty} with $\theta=25\degree{}$. Likewise, to transform from the sector coordinate system for sector n to the lab coordinate system, one would use matrix \ref{mat:rotz} with $\theta=(n-1)60\degree{}$.
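To make these transformations concrete, the following is a minimal sketch of the two rotations in Java (illustrative only; the class and method names are not those of clas12-offline-software):

\begin{verbatim}
// A minimal sketch of the coordinate transformations; names are
// illustrative, not the actual clas12-offline-software classes.
public final class SectorRotations {
    // Rotate a vector about the y-axis by theta (radians).
    static double[] rotateY(double[] v, double theta) {
        double c = Math.cos(theta), s = Math.sin(theta);
        return new double[]{c * v[0] + s * v[2], v[1], -s * v[0] + c * v[2]};
    }

    // Rotate a vector about the z-axis by theta (radians).
    static double[] rotateZ(double[] v, double theta) {
        double c = Math.cos(theta), s = Math.sin(theta);
        return new double[]{c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]};
    }

    // Tilted sector -> sector: undo the 25 degree tilt about y.
    static double[] tiltedToSector(double[] v) {
        return rotateY(v, Math.toRadians(25.0));
    }

    // Sector n -> lab: rotate about z by (n-1)*60 degrees.
    static double[] sectorToLab(double[] v, int sector) {
        return rotateZ(v, Math.toRadians((sector - 1) * 60.0));
    }
}
\end{verbatim}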
===Runge-Kutta Approximation===
The Runge-Kutta approximations are a family of numerical methods used to approximate solutions to ordinary differential equations. The simplest member of this family is Euler's method as shown in equations \ref{eq:eulerMethod:1} and \ref{eq:eulerMethod:2}. Here it is important to see that $f(t,y)=\frac{dy}{dt}$ and h is the step size in t. It is straightforward to see that it is possible to achieve an arbitrary level of accuracy by allowing h to be arbitrarily small.\autocite{griffiths10}
\begin{equation}
y_{n+1}=y_{n}+hf(t_{n},y_{n})
\label{eq:eulerMethod:1}
\end{equation}
\begin{equation}
t_{n+1}=t_{n}+h
\label{eq:eulerMethod:2}
\end{equation}
However, it is often either not possible or not feasible to allow h to become arbitrarily small. This has motivated the use of the higher order Runge-Kutta methods as they are more algorithmically efficient at giving approximations of the same quality. This allows for larger step sizes and thus fewer steps. Of these methods, the 4th order method, often referred to simply as ``the Runge-Kutta method,'' is used in the drift chamber reconstruction process. At its core, the 4th order method is a weighted average of 4 different approximations as seen in equations \ref{eq:rungeKutta:1} through \ref{eq:rungeKutta:6}.\autocite{griffiths10}
\begin{equation}
k_{1}=f(t_{n},y_{n})
\label{eq:rungeKutta:1}
\end{equation}
\begin{equation}
k_{2}=f\left(t_{n}+\frac{h}{2},y_{n}+\frac{h}{2}k_{1}\right)
\label{eq:rungeKutta:2}
\end{equation}
\begin{equation}
k_{3}=f\left(t_{n}+\frac{h}{2},y_{n}+\frac{h}{2}k_{2}\right)
\label{eq:rungeKutta:3}
\end{equation}
\begin{equation}
k_{4}=f\left(t_{n}+h,y_{n}+hk_{3}\right)
\label{eq:rungeKutta:4}
\end{equation}
\begin{equation}
y_{n+1}=y_{n}+\frac{h}{6}\left(k_{1}+2k_{2}+2k_{3}+k_{4}\right)
\label{eq:rungeKutta:5}
\end{equation}
\begin{equation}
t_{n+1}=t_{n}+h
\label{eq:rungeKutta:6}
\end{equation}
It is important to note here that all members of the Runge-Kutta family do accumulate errors as they step through more iterations. The error in a single step, called the Local Truncation Error, is of the order $O(h^{5})$, and the accumulated error is of the order $O(h^{4})$.\autocite{griffiths10}
Jefferson Lab's software team has implemented the 4th order algorithm into an object called a Swimmer. This Swimmer is given maps of the magnetic fields and a state vector which includes the position, momentum, and charge of the particle of interest. The swimmer can then be called upon to swim (step) the particle until it meets certain position criteria such as a particular value along the z-axis.
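To connect the equations above to the tracking problem, the following is a minimal sketch of a single 4th order Runge-Kutta step for a charged particle in a magnetic field, the operation a Swimmer performs repeatedly. It is illustrative only and does not reflect the actual Swimmer API; the state is stepped in path length $s$ using $d\pmb{x}/ds=\pmb{\hat{p}}$ and $d\pmb{p}/ds=kq(\pmb{\hat{p}}\times\pmb{B})$.

\begin{verbatim}
// Illustrative RK4 "swim" step, not the actual Swimmer API.
// State y = {x, y, z, px, py, pz}; the independent variable is path
// length s. With p in GeV/c, B in Tesla, and distances in meters:
// dx/ds = p/|p|, dp/ds = K*q*(p/|p|) x B, where K = 0.299792458
// (GeV/c)/(T m) and q is the charge in units of e.
public class Rk4Swimmer {
    static final double K = 0.299792458;

    interface Field { double[] b(double x, double y, double z); }

    static double[] deriv(double[] y, double q, Field field) {
        double p = Math.sqrt(y[3]*y[3] + y[4]*y[4] + y[5]*y[5]);
        double ux = y[3]/p, uy = y[4]/p, uz = y[5]/p;  // unit momentum
        double[] b = field.b(y[0], y[1], y[2]);
        return new double[]{
            ux, uy, uz,                              // dx/ds
            K*q*(uy*b[2] - uz*b[1]),                 // dp/ds = Kq (u x B)
            K*q*(uz*b[0] - ux*b[2]),
            K*q*(ux*b[1] - uy*b[0])
        };
    }

    // One 4th order Runge-Kutta step of size h along the path.
    static double[] step(double[] y, double h, double q, Field field) {
        double[] k1 = deriv(y, q, field);
        double[] k2 = deriv(add(y, k1, h/2), q, field);
        double[] k3 = deriv(add(y, k2, h/2), q, field);
        double[] k4 = deriv(add(y, k3, h), q, field);
        double[] out = new double[6];
        for (int i = 0; i < 6; i++)
            out[i] = y[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]);
        return out;
    }

    static double[] add(double[] y, double[] k, double scale) {
        double[] out = new double[6];
        for (int i = 0; i < 6; i++) out[i] = y[i] + scale*k[i];
        return out;
    }
}
\end{verbatim}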
===Kalman Fitter===
The Kalman Fitter is built on the concept of a Kalman Filter. A Kalman Filter is a statistical algorithm which combines a hypothetical model with measurements to find the Best Linear Unbiased Estimate (BLUE) of an evolved system. Included in this section is a description of the Kalman Filter process, a summary of its mathematics, and a discussion of the implementation used in the Hit Based and Time Based Reconstruction algorithms.

In general, the Kalman Filter algorithm proceeds as follows. First, the initial state vector of interest and its associated uncertainties are evolved until there is a measurement available to compare against. The algorithm then compares the predicted and measured values and takes a weighted average of them wherein more weight is given to values with smaller uncertainties. The state vector is updated and the algorithm repeats. It is worth mentioning that the Kalman Filter is efficient in the sense that there is no need to keep track of old state vectors; there is instead a covariance matrix which is updated every step.\autocite{miller87}
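As an illustration of this predict-compare-update cycle, the following is a minimal one-dimensional Kalman Filter (a scalar state with random-walk process noise). It is not the filter used in the reconstruction; it merely writes out the core weighted average.

\begin{verbatim}
// Minimal one-dimensional Kalman Filter: scalar state x, measured
// directly with noise variance rVar; the state itself drifts with
// process noise variance qVar. Illustrative only.
public class Kalman1D {
    double x;   // state estimate
    double p;   // estimate variance

    Kalman1D(double x0, double p0) { x = x0; p = p0; }

    // Predict: evolve the state (a random walk, so x is unchanged)
    // and grow the uncertainty by the process noise.
    void predict(double qVar) { p += qVar; }

    // Update: weighted average of prediction and measurement z,
    // with the Kalman gain k weighting by inverse variance.
    void update(double z, double rVar) {
        double k = p / (p + rVar);   // more trust in smaller variance
        x = x + k * (z - x);
        p = (1 - k) * p;
    }
}
\end{verbatim}

Note how the gain automatically gives more weight to whichever of the prediction and the measurement has the smaller variance, exactly the behavior described above.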
+ | |||
+ | A summary of the mathematics of the Kalman Filter is layed out in equations \ref{eq:kalman:1} through \ref{eq:kalman:16}. It is quoted from An Introduction to Kalman Filtering with Applications by Miller and Leskiw.\autocite{miller87} | ||
+ | |||
+ | Let $\pmb{x}(t)$ be a state vector which satisfies the deterministic ordinary differential system | ||
+ | \begin{equation} | ||
+ | \pmb{\dot{x}}(t)=F(t)\pmb{x}(t) | ||
+ | \label{eq:kalman:1} | ||
+ | \end{equation} | ||
+ | \begin{equation} | ||
+ | \pmb{x}(t_{0})=\pmb{x}_{0} | ||
+ | \label{eq:kalman:2} | ||
+ | \end{equation} | ||
+ | Then in the Kalman theory we have: | ||
+ | |||
+ | \textbf{System model} | ||
+ | \begin{equation} | ||
+ | \pmb{\dot{X}}(t)=F(t)\pmb{X}(t)+\pmb{V}(t) | ||
+ | \label{eq:kalman:3} | ||
+ | \end{equation} | ||
+ | \begin{equation} | ||
+ | \pmb{X}(t_{0})=\pmb{\hat{x}}_{0} | ||
+ | \label{eq:kalman:4} | ||
+ | \end{equation} | ||
+ | where $\pmb{V}(t)$ has mean zero and cross-covariance matrix $S(t,s)=B(t)\Omega(t)B'(t)\delta(t-s)$ where $\Omega(t)$ is a positive definite spectral density matrix and $B(t)$ is a square matrix. The random initial condition $\pmb{\hat{x}}_{0}$ has mean $\pmb{x}_{0}$ and positive definite covariance matrix $P_0$. | ||
+ | |||
+ | \textbf{Measurement model} | ||
+ | \begin{equation} | ||
+ | \pmb{Z}_{k}=H_{k}\pmb{x}(t_{k})+\pmb{v}_{k}, k=1,2,... | ||
+ | \label{eq:kalman:5} | ||
+ | \end{equation} | ||
+ | where $\pmb{v}_{k}$ has mean zero and positive definite covariance matrix $R_{k}$. It is assumed that $\pmb{V}(t)$, $\pmb{\hat{x}}_{0}$, and $\pmb{v}_{k}, k=1,2,...$ are independent. | ||
+ | |||
+ | \textbf{State estimate propagation} | ||
+ | \begin{equation} | ||
+ | \pmb{\dot{\hat{x}}}(t)=F(t)\pmb{\hat{x}}(t), t_{0}\leq{}t<t_{l} | ||
+ | \label{eq:kalman:6} | ||
+ | \end{equation} | ||
+ | \begin{equation} | ||
+ | \pmb{\hat{x}}(t_{0})=\pmb{\hat{x}}_{0} | ||
+ | \label{eq:kalman:7} | ||
+ | \end{equation} | ||
+ | and | ||
+ | \begin{equation} | ||
+ | \pmb{\dot{\hat{x}}}=F(t)\pmb{\hat{x}}(t) | ||
+ | \label{eq:kalman:8} | ||
+ | \end{equation} | ||
+ | has initial condition $\pmb{\hat{x}}(t_{k})$ in $[t_{k}, t_{k+1},...), k=1,2,...$ | ||
+ | |||
+ | \textbf{Error covariance propagation} | ||
+ | \begin{equation} | ||
+ | \dot{P}(t)=F(t)P(t)+P(t)F'(t)+B(t)\Omega(t)B'(t), t_{0}\leq{}t<t_{l} | ||
+ | \label{eq:kalman:9} | ||
+ | \end{equation} | ||
+ | \begin{equation} | ||
+ | P(t_{0})=P_{0} | ||
+ | \label{eq:kalman:10} | ||
+ | \end{equation} | ||
+ | and | ||
+ | \begin{equation} | ||
+ | \dot{P}(t)=F(t)P(t)+P(t)F'(t)+B(t)\Omega(t)B'(t) | ||
+ | \label{eq:kalman:11} | ||
+ | \end{equation} | ||
+ | has initial condition $P(t_{k})$ in $[t_{k}, t_{k+1},...), k=1,2,...$ | ||
+ | |||
+ | \textbf{State estimate update} | ||
+ | \begin{equation} | ||
+ | \pmb{\hat{x}}(t_{k})=(I-K_{k}H_{k})\pmb{\hat{x}}(t^{-}_{k})+K_{k}\pmb{Z}_{k} | ||
+ | \label{eq:kalman:12} | ||
+ | \end{equation} | ||
+ | \begin{equation} | ||
+ | =P(t_{k})[P^{-1}(t^{-}_{k})\pmb{\hat{x}}(t^{-}_{k})+{H'}_{k}{R_{k}}^{-1}\pmb{Z}_{k}], k=1,2,... | ||
+ | \label{eq:kalman:13} | ||
+ | \end{equation} | ||
+ | |||
+ | \textbf{Error covariance update} | ||
+ | \begin{equation} | ||
+ | P(t_{k})=[P^{-1}(t^{-}_{k})+{H'}_{k}{R^{-1}}_{k}H_{k}]^{-1} | ||
+ | \label{eq:kalman:14} | ||
+ | \end{equation} | ||
+ | \begin{equation} | ||
+ | =(I-K_{k}H_{k})P(t^{-}_{k}), k=1,2,... | ||
+ | \label{eq:kalman:15} | ||
+ | \end{equation} | ||
+ | |||
+ | \textbf{Gain matrix} | ||
+ | \begin{equation} | ||
+ | K_{k}=P(t_{k}){H'}_{k}{R^{-1}}_{k}, k=1,2,... | ||
+ | \label{eq:kalman:16} | ||
+ | \end{equation} | ||
+ | |||
+ | \bigskip | ||
This is implemented in the Fitter as a method of refining a trajectory provided by calculations done on the raw hits. In essence, this method takes the initial state vector and steps it to each drift chamber region using the Runge-Kutta based Swimmer discussed in the previous section. At each region, the Kalman Filter is used to refine the trajectory. After finishing with the third region, the algorithm then steps back to near the starting location of the initial state vector and repeats the process up to 30 times. The algorithm will exit early if the \textchi{}\textsuperscript{2} is sufficiently small.
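A schematic of this iterate-until-convergence structure might look as follows (illustrative pseudocode only; the types and method names here are hypothetical, not those of clas12-offline-software):

\begin{verbatim}
// Schematic only: types and methods are hypothetical stand-ins.
StateVector fit(StateVector state, List<Region> regions) {
    for (int iter = 0; iter < 30; iter++) {
        for (Region r : regions) {
            state = swimmer.swimTo(state, r.z());           // RK4 step
            state = filter.update(state, r.measurement());  // Kalman step
        }
        if (filter.chiSquared() < CHI2_CUTOFF) break;  // early exit
        state = swimmer.swimTo(state, zStart);  // back near the start
    }
    return state;
}
\end{verbatim}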
+ | |||
+ | ==Rastered Beam== | ||
+ | A rastered beam is the intentional movement of the incident electrons, ideally translating without rotation, over the surface of the target. The primary purpose of this is to distribute the energy deposited into the target more uniformly and avoid localized target heating that would result in physical changes to the target. In the case of a cryogenic target, overheating can cause localized target boiling, reducing the effective length of the target, and result in a lower luminosity of the experiment. In the case of a polarized target, overheating can impact the ability of the target to be polarized resulting in a need to recover target polarization through annealing. | ||
+ | |||
+ | As already stated, the ideal case is a raster that purely translates the incident electrons in the plane perpendicular to the electron momentum without any rotation. The underlying assumption used to determine the kinematics of an observed scattering event is that the incident electrons were directed parallel to a coordinate axis. In the CLAS12, the reference coordinate system aligns the z-axis such that it points parallel to the incident electrons momentum. The scattered electron's azimuthal angle with respect to this axis is used to quantify kinematic variables in the reaction and the cross section. Most notably, this angle is used to determine the amount of four-momentum that was transferred to the target. It follows that an angular offset in the incident electron beam would be an error that propagates to the measured kinematics and cross-sections. | ||
+ | |||
+ | Unfortunately, it is nigh impossible to be able to achieve pure translation in a real lab setting. It is preferable then to simply minimize the changes in incident angle to such a degree that the contribution to systematic errors are minimal. In the common case of using a dipole magnet to create raster behavior, it is sufficient that the angular size of the target relative to the magnets position be quite small. This ensures that any particle that hits the target will also have a small relative change in angle compared to any other particle. | ||
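For a rough sense of scale (an illustrative estimate, not a measured CLAS12 value), a raster magnet a distance $L$ upstream of the target that produces a transverse displacement $r$ at the target introduces an angular change of at most
\begin{equation}
\Delta\theta\approx{}\tan^{-1}\left(\frac{r}{L}\right)\approx{}\frac{r}{L},
\end{equation}
so a one centimeter raster produced ten meters upstream corresponds to roughly one milliradian.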
+ | |||
+ | =Simulation and Reconstruction= | ||
+ | This chapter describes the steps taken to determine the efficacy of the proposed reconstruction algorithm. Included are a description of the physical setup of the actual polarized SIDIS dual target, a description of the simulations done to gather test data, and a discussion of both the current and proposed reconstruction algorithms. The ultimate goal of the proposed reconstruction algorithm is to be able to confidently state which of the two subtargets any given event spawned from. | ||
+ | |||
+ | ==Physical Setup== | ||
+ | Figure \ref{fig:forestTarget} provides a general description of the target used in the simulation. In the simulation, the setup is centered in the middle of the solenoid magnet with a gap of four centimeters between the two targets. Each target is two centimeters long and has a radius of one centimeter. Each of the targets can be independently poalarized and may contain either NH\textsubscript{3} or ND\textsubscript{3}. | ||
+ | |||
+ | \begin{figure} | ||
+ | \includegraphics[width=\textwidth]{./Img/TF_RGC_DualPoltTargFig_11-16-2017.png} | ||
+ | \caption{A description of the general geometry of the polarized dual target.\autocite{forestTarget}} | ||
+ | \label{fig:forestTarget} | ||
+ | \end{figure} | ||
+ | |||
==Simulation==
All of the data used to analyze the efficacy of the proposed reconstruction algorithm was generated using GEMC version 4a.2.4. GEMC\autocite{gemc} (GEant4 Monte-Carlo) takes in configuration files called GCards. These GCards direct the program to load in certain tagged releases of the files (geometry, magnetic fields, etc.) required to simulate the desired experiment. The CLAS12 software team has created a repository containing various different revisions of tags which can be used.\autocite{clas12Tags} A sample GCard can be found in Appendix A.

Additionally, the GCards allow the user to set certain parameters within the simulation. Of particular importance are scaling the magnetic fields and specifying the manner in which the simulated particles are generated. The magnetic field scales are set by simply passing in the desired scale factor. The event generator is somewhat more complicated. It is broken into four different components: beam momentum, momentum spread, beam position, and position spread. The beam momentum has four parameters: simulated particle, momentum magnitude, momentum polar angle, and momentum azimuthal angle. The momentum spread has three parameters: plus or minus momentum spread, plus or minus polar angle, and plus or minus azimuthal angle. The beam position has three parameters: x-position, y-position, and z-position. The position spread has two parameters: spread radius in the x-y plane and z-position spread. A full description of all available options can be found at https://gemc.jlab.org/gemc/html/documentation/options.html.
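As an illustration of how these generator components map onto GCard options (the option names follow the GEMC documentation linked above, but the exact syntax and values here should be treated as an assumption, not a copy of the GCard in Appendix A):

\begin{verbatim}
<!-- Illustrative generator options only; values are placeholders,
     not those of table gemcParams. -->
<option name="BEAM_P"   value="e-, 4.75*GeV, 22.5*deg, 0*deg"/>
<option name="SPREAD_P" value="3.25*GeV, 17.5*deg, 180*deg"/>
<option name="BEAM_V"   value="(0.0, 0.0, -3.0)cm"/>
<option name="SPREAD_V" value="(1.0, 1.0)cm"/>
\end{verbatim}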
+ | |||
+ | It should be noted here that the parameters for the momentum were provided by Dr. Tony Forest\autocite{forestTarget} and that they are indicative of the expected kinematic range of the actual experiment. | ||
+ | |||
\begin{table}[!ht]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
 & e- In Up & e- In Down & e- Out Up & e- Out Down \\
\hline
Toroid Scale & -1.0 & -1.0 & 1.0 & 1.0 \\
\hline
$|P|$ & $4.75\pm{}3.25$ GeV & $4.75\pm{}3.25$ GeV & $4.75\pm{}3.25$ GeV & $4.75\pm{}3.25$ GeV \\
\hline
$\theta$ & $22.5\pm{}17.5\degree$ & $22.5\pm{}17.5\degree$ & $22.5\pm{}17.5\degree$ & $22.5\pm{}17.5\degree$ \\
\hline
$\phi$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ \\
\hline
$X$ & $0.0$ cm & $0.0$ cm & $0.0$ cm & $0.0$ cm \\
\hline
$Y$ & $0.0$ cm & $0.0$ cm & $0.0$ cm & $0.0$ cm \\
\hline
$R$ & $1.0$ cm & $1.0$ cm & $1.0$ cm & $1.0$ cm \\
\hline
$Z$ & $-3.0\pm{}1.0$ cm & $3.0\pm{}1.0$ cm & $-3.0\pm{}1.0$ cm & $3.0\pm{}1.0$ cm \\
\hline
\hline
 & \textpi{}+ Up & \textpi{}+ Down & \textpi{}- Up & \textpi{}- Down \\
\hline
Toroid Scale & 1.0 & 1.0 & -1.0 & -1.0 \\
\hline
$|P|$ & $1.8\pm{}1.0$ GeV & $1.8\pm{}1.0$ GeV & $1.8\pm{}1.0$ GeV & $1.8\pm{}1.0$ GeV \\
\hline
$\theta$ & $45.0\pm{}45.0\degree$ & $45.0\pm{}45.0\degree$ & $45.0\pm{}45.0\degree$ & $45.0\pm{}45.0\degree$ \\
\hline
$\phi$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ \\
\hline
$X$ & $0.0$ cm & $0.0$ cm & $0.0$ cm & $0.0$ cm \\
\hline
$Y$ & $0.0$ cm & $0.0$ cm & $0.0$ cm & $0.0$ cm \\
\hline
$R$ & $1.0$ cm & $1.0$ cm & $1.0$ cm & $1.0$ cm \\
\hline
$Z$ & $-3.0\pm{}1.0$ cm & $3.0\pm{}1.0$ cm & $-3.0\pm{}1.0$ cm & $3.0\pm{}1.0$ cm \\
\hline
\end{tabular}
\caption{Descriptions of the non-default parameters used to generate the eight data sets. Inbending and outbending are shortened to In and Out respectively; upstream and downstream are shortened to Up and Down respectively. $\theta$ denotes the momentum polar angle and $\phi$ the azimuthal angle.}
\label{tab:gemcParams}
\end{table}
+ | |||
+ | Four pairs of data sets were generated for this analysis. Each pair consists of one 5000 event data set generated uniformly within each subtarget and was later combined to make a single 10000 event data set. The four ultimate data sets are labled as such: electron-inbending, electron-outbending, postive pion, and negative pion. The two electron set parameters differ only by the scale of the toroid. In one case it is set such that electrons bend into the beam line and in the other case they bend away. In the case of the pions, the toroid scale is set such that particles always bend into the beamline. Table \ref{tab:gemcParams} shows all of the non-default parameters used to generate each data set. | ||
+ | |||
+ | After GEMC has been run, the resulting EVIO (Event Input Output) files must be converted to HIPO (High Performance Output) files. This is done by calling one of the utilities packaged in clas12-offline-software, evio2hipo. Once that is done, the upstream and downstream components of each pair of data sets are merged using another clas12-offline-software program, hipo-utils. Examples for running all of these can be found in Appendix A. All that remains then is to run the HIPO files through the two pre-existing reconstruction algorithms and then the proposed algorithm. | ||
+ | |||
+ | ==Current Reconstruction Algorithms== | ||
+ | There are currently two drift chamber reconstruction algorithms. The first, called Hit Based, feeds into the second, called Time Based. The following paragraphs are brief descriptions of both of them. Appendix B holds lengthy descriptions of both algorithms. | ||
+ | |||
+ | The Hit Based algorithm is the first pass at reconstruction and is largely used to lay groundwork for the Time Based algorithm. In broad strokes, the Hit Based algorithm works as by first collecting all of the drift chamber hits and grouping them by their superlayer. Those groups are then processed and a linear fit is attempted. That fit is then used to find a position-momentum state vector between the superlayers of each region. Finally, a combination of a Kalman Fitter and the Runge-Kutta based Swimmer are used to reconstructed the trajectory of the particle and, thus, the interaction vertex. | ||
+ | |||
+ | The Time Based algorithm works by taking the tracks already reconstructed by the Hit Based reconstruction and refining them further. First, the groups of hits are regathered and some of the likely-extraneous hits are culled using timing information. Those groups then undergo another test to check whether or not the entire group should be thrown out or not. The Kalman-Fitter and Swimmer are then used again to find the new trajectories and interaction vertices. | ||
+ | |||
+ | Both of these reconstruction algorithms are run on each HIPO file by calling the notsouseful-util program that is packaged as a part of clas12-offline-software. Once again, an example of of the command to call the program can be found in Appendix A. | ||
+ | |||
+ | ==Proposed Reconstruction Algorithm== | ||
+ | The Raster Based algorithm takes the output of both the Time and Hit Based algorithms and attempts to fit them to an off-axis beam position. This is achieved by performing a grid search through a small region of momentum space about a state vector provided by older algorithms. The actual vertex is found by swimming all of these slightly varied state vectors to the beam line and calculating the DOCA. Please note that the source code for this is found in its entirety in Appendix C. | ||
+ | |||
+ | This program currently only exists in a fork off of Jefferson Lab's official GitHub repository. The address for this is https://github.com/friadavi/clas12-offline-software. Should the reader wish to run the program, they may run the fritracking program which is packaged with the rest of the clas12-offline-software in the aforementioned fork. An example of calling the program may be found in Appendix A. | ||
+ | |||
+ | For this analysis, the data sets reconstructed by the Hit and Time based algorithms were reconstructed multiple times while the two grid search parameters, span and samples, were varied independently. The span was varied, inclusively, between values of 0.001 and 0.015 (0.1 and 1.5\%). The number of samples was varied, again inclusively, between 3 and 11. This allows for an analysis of the reconstruction as a function of the two independently. | ||
+ | |||
+ | ===Parameters=== | ||
+ | This algorithm takes a variety of input parameters which control both the extent and granularity of the two searches involved as well as various other variables. These parameters are listed and explained below. | ||
+ | |||
\begin{description}
\item[Input]The name and path of the input file to reconstruct.
\item[Output]The name and path of the output file. Any preexisting file will be overwritten.
\item[Engine]The engine whose output is to be reconstructed. The options at present are Hit Based and Time Based. In the event that this parameter is not specified, the algorithm will preferentially take the Time Based output if available, and the Hit Based output if not.
\item[Solenoid]The scale factor of the magnetic field generated by the solenoid.
\item[Toroid]The scale factor of the magnetic field generated by the toroid.
\item[Grid Search Span]The span in each Cartesian dimension in momentum space about which the grid search will take place. This value is provided as a percentage such that the span of the search is $p_{x,y,z}(1\pm{}value)$.
\item[Grid Search Samples]The number of evenly spaced sample points to take in each momentum dimension. This includes the edges of the span; as such, this number should never be less than two. It is recommended to always use an odd number so that the starting value is included in the search.
\item[Vertex Search Upper Bound]The upper bound in the z-direction in which to search for the DOCA to the beam position.
\item[Vertex Search Lower Bound]The lower bound in the z-direction in which to search for the DOCA to the beam position.
\item[Vertex Samples]The number of evenly spaced sample points to take in each iteration of the vertex search. This includes the upper and lower bounds; as such, this number should not be less than two.
\item[Vertex Iterations]The number of recursive iterations to go through in the vertex search.
\item[Threads]The maximum number of threads that can be used in the reconstruction.
\end{description}
+ | |||
+ | ===Overarching Algorithm=== | ||
+ | The algorithm begins by parsing commandline options; it stop the reconstruction if any errors occur. After that, the magnetic field maps are initialized and scaled if necessary. Then, for each event within the input file, the reconstruction algorithm is run and the output, if any, is written to the output file. | ||
+ | |||
+ | ===Reconstruction=== | ||
+ | This begins by reading in whichever output banks are required, as dictated by the Engine commandline parameter. Then the beam position is read in from the simulation bank. If the event is real data, then the beam position is assumed to be at $x=0, y=0$. There is currently an assumed radius of error of 0.5mm in the beam position. The algorithm then collects a state vector output by the Hit Based and Time Based algorithm. This state vector's position is just upstream of the region 1 drift chamber and will be the starting point of the search. The grid search is run and the output is written to the data banks. | ||
+ | |||
+ | ===Grid Search=== | ||
+ | This sub-algorithm starts by computing the upper and lower bounds of the momentum space in which to search. Arrays to store the output of each search point are then created and a ThreadPool is established to allow for the multithreading of the vertex searches for each grid search sample point. A vertex search job thread is then submitted for every sample point in the three dimensional momentum search space. For the Grid Search Samples value n, there are $n^{3}$ sample points, each evenly spaced in all three dimensions. | ||
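A condensed sketch of this sub-algorithm is shown below (illustrative only; the full source is in Appendix C, and VertexResult and vertexSearch are stand-ins):

\begin{verbatim}
// Illustrative grid search skeleton; see Appendix C for the real
// code. For span s and n samples, each momentum component is
// sampled evenly over [p*(1-s), p*(1+s)], giving n^3 candidates.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class GridSearchSketch {
    record VertexResult(double z, double doca) {}

    // Stand-in for the swim-to-beam-line vertex search.
    static VertexResult vertexSearch(double px, double py, double pz) {
        return new VertexResult(0.0, 0.0);
    }

    static List<VertexResult> search(double p0x, double p0y, double p0z,
                                     double span, int n, int nThreads)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        List<Future<VertexResult>> jobs = new ArrayList<>();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                for (int k = 0; k < n; k++) {
                    double px = sample(p0x, span, n, i);
                    double py = sample(p0y, span, n, j);
                    double pz = sample(p0z, span, n, k);
                    jobs.add(pool.submit(() -> vertexSearch(px, py, pz)));
                }
        List<VertexResult> results = new ArrayList<>();
        for (Future<VertexResult> job : jobs) results.add(job.get());
        pool.shutdown();
        return results;  // caller culls non-minima and picks the best
    }

    // The i-th of n evenly spaced samples on [p*(1-s), p*(1+s)].
    static double sample(double p, double s, int n, int i) {
        return p * (1 - s) + i * (2 * p * s / (n - 1));
    }
}
\end{verbatim}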
+ | |||
+ | After all submitted jobs have been completed, the algorithm searches through all of the outputs and identifies interaction vertices which are either within the beam line's margin of error or a local minimum. All other points are culled. Any points whose interaction vertex z value is outside of the vertex search bounds is culled. Finally, the algorithm searches through all surviving interaction vertices and selects for output the one whose initial starting momentum deviated the least from the Hit Based or Time Based momentum. | ||
+ | |||
+ | ===Vertex Search=== | ||
+ | This sub-algorithm begins by swimming from the state vector provided by the grid search to each sample z value within the search region. The samples points are evenly spaced throughout the region and include the boundaries. The swim outputs are collected, and the DOCA for each one is calculated. All local minima amongst the DOCAs are added to a list. If this is the last iteration, the algorithm returns the output which corresponds to the smallest DOCA, failing gracefully if no swims performed successfully. | ||
+ | |||
+ | In the event that there are more iterations to go, this algorithm recursively calls itself on each local minima. The bounds of the recursive search search are as follows: if the minimum is at the a boundary of the search region it is recentered on the boundary without changing the actual distance spanned and the function is called without decrementing the iteration count. Otherwise, the search region is recentered on the point associated with the local minimum and the boundaries are shrunk to coincide with the nearest neighbor sample points. The iteration count is decremented and the function is called. The output of each new search is collected and the one with the smallest DOCA is returned. | ||
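The recursion can be summarized as follows (again illustrative, with hypothetical helper names; the real implementation is in Appendix C):

\begin{verbatim}
// Illustrative recursion over z; swimAndDoca is a stand-in for
// swimming to a z value and computing the DOCA to the beam line.
static VertexResult refineVertex(double zLo, double zHi,
                                 int nSamples, int iters) {
    double step = (zHi - zLo) / (nSamples - 1);
    double[] doca = new double[nSamples];
    for (int i = 0; i < nSamples; i++)
        doca[i] = swimAndDoca(zLo + i * step);

    if (iters == 0) {  // last iteration: return the smallest DOCA
        int min = 0;
        for (int i = 1; i < nSamples; i++)
            if (doca[i] < doca[min]) min = i;
        return new VertexResult(zLo + min * step, doca[min]);
    }

    VertexResult best = null;
    for (int i = 0; i < nSamples; i++) {
        if (!isLocalMin(doca, i)) continue;
        double z = zLo + i * step;
        VertexResult r;
        if (i == 0 || i == nSamples - 1) {
            // Boundary minimum: recenter the same-width window on it
            // and recurse without consuming an iteration.
            double half = (zHi - zLo) / 2;
            r = refineVertex(z - half, z + half, nSamples, iters);
        } else {
            // Interior minimum: shrink to the neighboring samples
            // and consume one iteration.
            r = refineVertex(z - step, z + step, nSamples, iters - 1);
        }
        if (best == null || r.doca() < best.doca()) best = r;
    }
    return best;  // null if no swim succeeded (graceful failure)
}

static boolean isLocalMin(double[] d, int i) {
    return (i == 0 || d[i] <= d[i - 1])
        && (i == d.length - 1 || d[i] <= d[i + 1]);
}
\end{verbatim}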
+ | |||
+ | =Analysis= | ||
+ | Before moving into the analysis proper, it is worth summarizing what exactly is being analyzed and to what end. Four data sets, each initially of 10,000 events, have been generated and run through the Hit Based and Time Based reconstruction algorithms. The ouputs of the old reconstructions were then fed into the proposed reconstruction multiple times while varying the number of samples used and the span of the search. These data sets consist of events that were generated within the volume of the subtargets of the polarized SIDIS dual target. Two of the data sets consist of electrons, one with an inbending toroid configuration and one with an outbending configuration. The other two data sets are \textpi{}+ and \textpi{}- particles, each with a toroid configuration which will cause them to bend into the beam line. The goal is to be able to reconstruct the interaction vertex of each particle well enough that one could say with confidence which of the two subtargets it was generated in. | ||
+ | |||
+ | The analysis will be presented as follows: for each of the four data sets, graphs formed from the one Raster Based reconstruction output which is included in both the set generated by varying the span and the set generated by varying the number of samples will be offered forth as being indicative of the general behavior of their data set. The values for the span and number of samples that were used to produce these graphs are 0.005 (0.5\%) and seven, respectively. These are also the values at which one parameter will be held constant while the other is varied. After that a number of graphs will be presented which indicate trends, or the lack thereof, amongst the varied parameters. | ||
+ | |||
+ | \section{e- Inbending} | ||
+ | To begin, examine figures \ref{fig:e-InDeltaXHist} and \ref{fig:e-InDeltaYHist}. There is clearly a marked improvement in the quality of reconstruction in both the x and y directions. Figure \ref{fig:e-InDeltaZHist} shows an improvement in the reconstruction in the z direction, though decidedly less impressive than the x and y case. Of note here is the \textsigma of the gaussian fit to the Raster Based reconstruction. Its value is approximately $\frac{1}{5}$ of the 4cm distance between the two subtargets. This suggests that this particular set of parameters puts the reconstruction right on the cusp of 5\textsigma confidence in being able to tell from which subtarget particles were generated. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/7-/S-T-electron7-0.005DeltaXHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for inbending electrons} | ||
+ | \label{fig:e-InDeltaXHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/7-/S-T-electron7-0.005DeltaYHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for inbending electrons} | ||
+ | \label{fig:e-InDeltaYHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/7-/S-T-electron7-0.005DeltaZHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for inbending electrons} | ||
+ | \label{fig:e-InDeltaZHist} | ||
+ | \end{figure} | ||
+ | |||
+ | Also worthy of note here is the difference in counts for each algorithm. It would seem that, for the distribution of particles generated, the HB/TB reconstruction is approximately 75\% efficient and the Raster Based reconstruction is approximately 66\% efficient on top of that. This leads to an ultimate efficiency of approximately 50\%. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SampleDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for inbending electrons} | ||
+ | \label{fig:e-InDeltaZSamplesMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SampleEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for inbending electrons} | ||
+ | \label{fig:e-InSamplesEff} | ||
+ | \end{figure} | ||
+ | |||
+ | Figure \ref{fig:e-InDeltaZSamplesMeta} shows that the number of samples has very little effect on the reconstruction while figure \ref{fig:e-InSamplesEff} shows that increasing the number of samples does tend to increase the reconstruction efficiency by a small amount. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SpanDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for inbending electrons} | ||
+ | \label{fig:e-InDeltaZSpanMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SpanEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for inbending electrons} | ||
+ | \label{fig:e-InSpanEff} | ||
+ | \end{figure} | ||
+ | |||
+ | Figure \ref{fig:e-InDeltaZSpanMeta} shows that there is a general trend towards a larger \textsigma{} as the search span is increased. Additionally, figure \ref{fig:e-InSpanEff} shows that the efficiency also trends generally positive as the span increases. A potential reason for both of these will be explored in the conclusion. | ||
+ | |||
+ | \section{e- Outbending} | ||
+ | Comparing figures \ref{fig:e-OutDeltaXHist} through \ref{fig:e-OutSpanEff} with their corresponding figures \ref{fig:e-InDeltaXHist} through \ref{fig:e-InSpanEff} will show largely identical results, albeit with the outbending situation having a generally higher efficiency. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/7-/S-T+electron7-0.005DeltaXHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for outbending electrons} | ||
+ | \label{fig:e-OutDeltaXHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/7-/S-T+electron7-0.005DeltaYHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for outbending electrons} | ||
+ | \label{fig:e-OutDeltaYHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/7-/S-T+electron7-0.005DeltaZHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for outbending electrons} | ||
+ | \label{fig:e-OutDeltaZHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SampleDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for outbending electrons} | ||
+ | \label{fig:e-OutDeltaZSamplesMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SampleEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for outbending electrons} | ||
+ | \label{fig:e-OutSamplesEff} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SpanDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for outbending electrons} | ||
+ | \label{fig:e-OutDeltaZSpanMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SpanEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for outbending electrons} | ||
+ | \label{fig:e-OutSpanEff} | ||
+ | \end{figure} | ||
+ | |||
+ | \section{\textpi{}+} | ||
+ | Before looking at any fits, one should first turn their attention to the greatly reduced number of events. This is mainly due to reconstruction failures in the HB/TB algorithms as the efficiency of the Raster Based algorithm is not sufficiently worse to account for the change. The analysis in this section should be taken with a grain of salt due to these reduced statistics. Figures \ref{fig:pi+OutDeltaXHist} and \ref{fig:pi+OutDeltaYHist} show that the resolution in the x and y directions are largely unchanged from the electron reconstructions. Unfortunately, the \textsigma{} of the fit in figure \ref{fig:pi+OutDeltaZHist} has gotten larger and is now solidly between $\frac{1}{5}$ and $\frac{2}{5}$ of the distance between the two subtargets, thus reducing the confidence in stating which target the event came from. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/7-/S-T+piPlus7-0.005DeltaXHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}+'s} | ||
+ | \label{fig:pi+OutDeltaXHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/7-/S-T+piPlus7-0.005DeltaYHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}+'s} | ||
+ | \label{fig:pi+OutDeltaYHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/7-/S-T+piPlus7-0.005DeltaZHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}+'s} | ||
+ | \label{fig:pi+OutDeltaZHist} | ||
+ | \end{figure} | ||
+ | |||
+ | Once again, as per figures \ref{fig:pi+OutDeltaZSamplesMeta} and \ref{fig:pi+OutSamplesEff}, the number of samples taken has little effect on the resolution in the z direction, while the efficiency does increase as the number of samples increases. This time, however, the span has a lessened effect on the \textsigma{} of the z direction, as seen in figure \ref{fig:pi+OutDeltaZSpanMeta}. Figure \ref{fig:pi+OutSpanEff} shows that the efficiency increases as the span increases, just as was the case for the electrons. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SampleDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for \textpi{}+'s} | ||
+ | \label{fig:pi+OutDeltaZSamplesMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SampleEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for \textpi{}+'s} | ||
+ | \label{fig:pi+OutSamplesEff} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SpanDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for \textpi{}+'s} | ||
+ | \label{fig:pi+OutDeltaZSpanMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SpanEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for \textpi{}+'s} | ||
+ | \label{fig:pi+OutSpanEff} | ||
+ | \end{figure} | ||
+ | |||
+ | \section{\textpi{}-} | ||
+ | Much as the outbending electron reconstructions were largely identical to the inbending reconstructions, the \textpi{}- reconstructions are largely identical to the \textpi{}+ reconstructions. Note that the fit to the Raster Based \textDelta{}x data in figure \ref{fig:pi-OutDeltaXHist} is clearly not correct; this is almost certainly an artifact of the small sample size. | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/7-/S-T-piMinus7-0.005DeltaXHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}-'s} | ||
+ | \label{fig:pi-OutDeltaXHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/7-/S-T-piMinus7-0.005DeltaYHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}-'s} | ||
+ | \label{fig:pi-OutDeltaYHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/7-/S-T-piMinus7-0.005DeltaZHist}.png} | ||
+ | \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}-'s} | ||
+ | \label{fig:pi-OutDeltaZHist} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SampleDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for \textpi{}-'s} | ||
+ | \label{fig:pi-OutDeltaZSamplesMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SampleEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for \textpi{}-'s} | ||
+ | \label{fig:pi-OutSamplesEff} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SpanDeltaZ}.png} | ||
+ | \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for \textpi{}-'s} | ||
+ | \label{fig:pi-OutDeltaZSpanMeta} | ||
+ | \end{figure} | ||
+ | |||
+ | \begin{figure}[!ht] | ||
+ | \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SpanEfficiency}.png} | ||
+ | \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for \textpi{}-'s} | ||
+ | \label{fig:pi-OutSpanEff} | ||
+ | \end{figure} | ||
+ | |||
+ | =Conclusion= | ||
+ | The goal of this project was ultimately to be able to say with confidence from which of the two subtargets within the polarized SIDIS dual target the electrons scattered. As the distance between the two subtargets is four centimeters, it follows that, to state with five sigma confidence which of the two an event originated from, the reconstruction's fit must have a sigma of at most eight millimeters. The analysis in the previous section has shown that there is a set of search parameters which allows one to make exactly that statement with five sigma certainty for electron events. Unfortunately, even the best, and least efficient, case of pion reconstruction only supports a four sigma claim. | ||
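+ | Explicitly, the eight millimeter figure follows directly from the geometry: for a subtarget separation $d = 40$mm and a desired significance of $n = 5$ standard deviations, the reconstruction must satisfy $\sigma \leq d/n = 8$mm. | ||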
+ | |||
+ | With respect to the relationship between the reconstruction efficiency and the span of the search, it would seem that the Raster Based algorithm acts as a filter of sorts. As the span increases, both the sigma of the reconstruction and the efficiency increase. This suggests that broadening the set of events which can be successfully reconstructed comes at the cost of reconstruction quality, which in turn suggests that, at narrower spans, the algorithm intrinsically culls events which cannot be reconstructed well. | ||
+ | |||
+ | =Appendices= | ||
+ | |||
+ | ==Appendix A== | ||
+ | This appendix contains examples of program calls used throughout the reconstruction process and an example GCard. | ||
+ | |||
+ | ===Example GCard=== | ||
+ | Below is an example GCard that could be fed into GEMC. | ||
+ | |||
+ | \begin{lstlisting} | ||
+ | <gcard> | ||
+ | |||
+ | <!-- target. Notice variation gives the target type. Can be: lH2, lD2, ND3 | ||
+ | <detector name="experiments/clas12/targets/cad/" factory="CAD"/> | ||
+ | <detector name="experiments/clas12/targets/target" factory="TEXT" variation="cad"/> | ||
+ | --> | ||
+ | <!-- PolTarg --> | ||
+ | <detector name="/jlab/workdir/Thesis/Scripts/detectors/clas12/targets/target" factory="TEXT" variation="PolTarg"/> | ||
+ | <detector name="/jlab/workdir/Thesis/Scripts/detectors/clas12/targets/PolTarg/" factory="CAD"/> | ||
+ | |||
+ | <!-- central detectors --> | ||
+ | <detector name="experiments/clas12/cnd/cnd" factory="TEXT" variation="original"/> | ||
+ | <detector name="experiments/clas12/bst/bst" factory="TEXT" variation="java"/> | ||
+ | <!-- detector name="experiments/clas12/bstShield/bstShield" factory="TEXT" variation="w51" --> | ||
+ | <detector name="experiments/clas12/micromegas/micromegas" factory="TEXT" variation="michel"/> | ||
+ | |||
+ | |||
+ | <!--ctof, cad --> | ||
+ | <detector name="experiments/clas12/ctof/ctof" factory="TEXT" variation="cad"/> | ||
+ | <detector name="experiments/clas12/ctof/javacad/" factory="CAD"/> | ||
+ | |||
+ | <!--high threshold cherenkov --> | ||
+ | <detector name="experiments/clas12/htcc/htcc" factory="TEXT" variation="original"/> | ||
+ | |||
+ | <!-- magnets --> | ||
+ | <detector name="experiments/clas12/magnets/solenoid" factory="TEXT" variation="original"/> | ||
+ | <detector name="experiments/clas12/magnets/cad/" factory="CAD" /> | ||
+ | |||
+ | |||
+ | <!-- Beamline configuration: FT is used --> | ||
+ | <detector name="experiments/clas12/ft/ft" factory="TEXT" variation="FTOn"/> | ||
+ | <detector name="experiments/clas12/beamline/cadBeamline/" factory="CAD"/> | ||
+ | <detector name="experiments/clas12/beamline/vacuumLine/" factory="CAD"/> | ||
+ | <detector name="experiments/clas12/beamline/beamline" factory="TEXT" variation="FTOn"/> | ||
+ | |||
+ | <!-- forward carriage --> | ||
+ | <detector name="experiments/clas12/fc/forwardCarriage" factory="TEXT" variation="TorusSymmetric"/> | ||
+ | |||
+ | <detector name="experiments/clas12/dc/dc" factory="TEXT" variation="java"/> | ||
+ | <detector name="experiments/clas12/ftof/ftof" factory="TEXT" variation="java"/> | ||
+ | <detector name="experiments/clas12/ec/ec" factory="TEXT" variation="java"/> | ||
+ | <detector name="experiments/clas12/pcal/pcal" factory="TEXT" variation="java"/> | ||
+ | <detector name="experiments/clas12/ltcc/ltcc" factory="TEXT" variation="original"/> | ||
+ | <detector name="experiments/clas12/ltcc/cad_cone/" factory="CAD"/> | ||
+ | <detector name="experiments/clas12/ltcc/cad/" factory="CAD"/> | ||
+ | |||
+ | |||
+ | <!-- you can scale the fields here. Remember torus -1 means e- INBENDING --> | ||
+ | <option name="SCALE_FIELD" value="TorusSymmetric, -1.0"/> | ||
+ | <option name="SCALE_FIELD" value="clas12-newSolenoid, -1.0"/> | ||
+ | |||
+ | <!-- hall field --> | ||
+ | <option name="HALL_FIELD" value="clas12-newSolenoid"/> | ||
+ | |||
+ | <!-- fields, precise mode --> | ||
+ | <option name="FIELD_PROPERTIES" value="TorusSymmetric, 2*mm, G4ClassicalRK4, linear"/> | ||
+ | <option name="FIELD_PROPERTIES" value="clas12-newSolenoid, 1*mm, G4ClassicalRK4, linear"/> | ||
+ | |||
+ | |||
+ | <!-- beam conditions --> | ||
+ | <option name="BEAM_P" value="e-, 4.75*GeV, 22.5*deg, 0*deg"/> | ||
+ | <option name="BEAM_V" value="(0.0, 0.0, 3.0)cm"/> | ||
+ | <option name="SPREAD_P" value="3.25*GeV, 17.5*deg, 180*deg"/> | ||
+ | <option name="SPREAD_V" value="(1.0, 1.0)cm"/> | ||
+ | |||
+ | |||
+ | <option name="SAVE_ALL_MOTHERS" value="0"/> | ||
+ | <option name="RECORD_MIRRORS" value="1"/> | ||
+ | |||
+ | <option name="PHYSICS" value="FTFP_BERT + STD + Optical"/> | ||
+ | |||
+ | <option name="OUTPUT" value="evio, out.ev"/> | ||
+ | |||
+ | <!-- Will print message every 10 events --> | ||
+ | <option name="PRINT_EVENT" value="10" /> | ||
+ | |||
+ | |||
+ | <!-- Run Number 11, picked up by digitization routines --> | ||
+ | <option name="RUNNO" value="11" /> | ||
+ | |||
+ | |||
+ | <!-- RF Signal needs event time window defined by LUMI_EVENT. | ||
+ | If Background is activated make sure to use LUMI_EVENT below instead.--> | ||
+ | <option name="LUMI_EVENT" value="0, 248.5*ns, 4*ns" /> | ||
+ | <option name="RFSETUP" value="0.499, 40, 20" /> | ||
+ | |||
+ | |||
+ | |||
+ | <!-- beam background. for 250 ns timewindow we have 124,000 e- on | ||
+ | a LH2 target at 10^35 luminosity | ||
+ | I suggest in this case to set SAVE_ALL_MOTHERS to 0 | ||
+ | or the many tracks will slow down the simulation a lot | ||
+ | For background studies use field in fast mode: | ||
+ | --> | ||
+ | |||
+ | <!-- | ||
+ | <option name="LUMI_EVENT" value="124000, 248.5*ns, 4*ns" /> | ||
+ | <option name="LUMI_P" value="e-, 10.6*GeV, 0*deg, 0*deg" /> | ||
+ | <option name="LUMI_V" value="(0.0, 0.0, -10)cm" /> | ||
+ | <option name="LUMI_SPREAD_V" value="(0.03, 0.03)cm" /> | ||
+ | --> | ||
+ | |||
+ | <!-- production threshold for passive volumes --> | ||
+ | <!-- beamline shielding: 10cm--> | ||
+ | <option name="PRODUCTIONCUTFORVOLUMES" value="innerShieldAndFlange, outerFlange, outerMount, nut1, nut2, nut3, nut4, nut5, nut6, nut7, nut8, nut9, taggerInnerShield, main-cone, main-cone, adjuster1, adjuster2, adjuster3, DSShieldFrontLead, DSShieldBackLead, DSShieldInnerAss, DSShieldBackAss, DSShieldFrontAss, DSShieldBackCover, DSShieldFrontCover, DSShieldFlangeAttachment, DSShieldFlange, 100" /> | ||
+ | |||
+ | <!-- vacuum line: 10cm--> | ||
+ | <option name="PRODUCTIONCUTFORVOLUMES" value="connectUpstreamToTorusPipe, connectTorusToDownstreamPipe, downstreamPipeFlange, 100" /> | ||
+ | |||
+ | <!-- torus magnet: 10cm--> | ||
+ | <option name="PRODUCTIONCUTFORVOLUMES" value="BoreShield, CenterTube, DownstreamShieldingPlate, DownstreamVacuumJacket, WarmBoreTube, apex, Shield1, Shield2, Shield3, Shield4, Shield5, Shield6, Shield7, shell1a, shell1b, shell2a, shell2b, shell3a, shell3b, shell4a, shell4b, shell5a, shell5b, shell6a, shell6b, 100" /> | ||
+ | |||
+ | |||
+ | <!-- Commented: To remove a detector, set its existence to no. For example, to remove the forward micromegas: | ||
+ | <detector name="FMT"> | ||
+ | <existence exist="no" /> | ||
+ | </detector> | ||
+ | --> | ||
+ | |||
+ | |||
+ | </gcard> | ||
+ | \end{lstlisting} | ||
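+ | For the present work, the most relevant entries are the two PolTarg detector lines near the top, which load the dual-target geometry in place of the standard targets, and the BEAM_P/SPREAD_P and BEAM_V/SPREAD_V options, which control the generated particle's momentum range and the position and spread of the generated vertices. The remaining entries are standard CLAS12 geometry, magnetic field, and background settings. | ||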
+ | |||
+ | ===Program Calls=== | ||
+ | An example of the gemc program call: | ||
+ | \begin{lstlisting} | ||
+ | bash gemc -USE_GUI=0 -gcard=/some/path/to/input.gcard -N=10000 -DATABASE=/some/path/to/local/database.sqlite -OUTPUT="evio, /some/path/to/output.evio" | ||
+ | \end{lstlisting} | ||
+ | |||
+ | An example of the evio2hipo program call: | ||
+ | \begin{lstlisting} | ||
+ | bash evio2hipo -n 10000 -r 11 -s -1.0 -t -1.0 -o /some/path/to/output.hipo /some/path/to/input.evio | ||
+ | \end{lstlisting} | ||
+ | |||
+ | An example of the hipo-utils program call: | ||
+ | \begin{lstlisting} | ||
+ | bash hipo-utils -merge -o /some/path/to/output.hipo /some/path/to/input1.hipo /some/path/to/input2.hipo | ||
+ | \end{lstlisting} | ||
+ | |||
+ | An example of the notsouseful-util program call: | ||
+ | \begin{lstlisting} | ||
+ | bash notsouseful-util -n 10000 -c 2 -i /some/path/to/input.hipo -o /some/path/to/output.hipo | ||
+ | \end{lstlisting} | ||
+ | |||
+ | An example of the fritracking program call: | ||
+ | \begin{lstlisting} | ||
+ | bash fritracking -s -1.0 -t -1.0 -i /some/path/to/input.hipo -o /some/path/to/output.hipo -x 3 -g 7 0.005 -v 9 2 -z -4.0 4.0 | ||
+ | \end{lstlisting} | ||
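+ | The options passed to fritracking above mirror the help text printed by the program itself (see the printHelp method in Appendix C): -s and -t scale the solenoid and toroid magnetic fields, -i and -o name the input and output hipo files, -x sets the maximum number of threads, -g sets the number of grid search samples per momentum dimension and the width of the search window about the nominal momentum, -v sets the number of vertex search samples and recursive iterations, and -z sets the lower and upper bounds in z, in centimeters, of the vertex search region. | ||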
− | = | + | ==Appendix B== |
− | This | + | This appendix contains descriptions of the previously existing drift chamber reconstruction algorithms, Hit Based and Time Based. These descriptions are current as of coatjava release 5c.7.4. |
− | ==Definition of Terms== | + | ===Definition of Terms=== |
There are several terms which must be first understood before reading further: | There are several terms which must be first understood before reading further: | ||
\begin{description} | \begin{description} | ||
Line 152: | Line 743: | ||
\end{description} | \end{description} | ||
− | ==Hit Based Algorithm== | + | ===Hit Based Algorithm=== |
The Hit Based algorithm takes the raw drift chamber hit positions, but not any time information beyond that which gives the position along any given wire, and attempts to perform track reconstruction. In general, this is simply used as a step to then perform the Time Based reconstruction. | The Hit Based algorithm takes the raw drift chamber hit positions, but not any time information beyond that which gives the position along any given wire, and attempts to perform track reconstruction. In general, this is simply used as a step to then perform the Time Based reconstruction. | ||
===Hits to Clusters=== | ===Hits to Clusters=== | ||
− | Raw hits are all initially sorted into clumps. Clumps are the rawest form of cluster as they are created by independently taking each superlayer in each sector and adding the hits found therein to the clump. The clump is then processed to remove hits caused by noise. This is determined by whether or not the DOCA for that hit is greater than the size of the cell. The list of clumps is then processed into a list of clusters by removing any clumps which have less than four total hits within them. A linear fit is then performed on the hits which comprise each cluster. The clusters are split into multiple clusters if necessary by means of checking the \textchi{}\textsuperscript{2} of a linear fit. Following that step, the program updates the raw hits with various additional information and stores them as well as the clusters. | + | Raw hits are all initially sorted into clumps. Clumps are the rawest form of a cluster, as they are created by independently taking each superlayer in each sector and adding the hits found therein to the clump. Each clump is then processed to remove hits caused by noise; a hit is flagged as noise if its DOCA is greater than the size of its cell. The list of clumps is then processed into a list of clusters by removing any clumps which have fewer than four total hits within them. A linear fit is then performed on the hits which comprise each cluster, and a cluster is split into multiple clusters if necessary based on the \textchi{}\textsuperscript{2} of that fit. Following that step, the program updates the raw hits with various additional information and stores them as well as the clusters.
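+ | As a concrete illustration of the cuts just described, consider the minimal sketch below. It is not the coatjava implementation: the Hit type and its fields are hypothetical stand-ins, and only the noise cut and the four-hit minimum are shown; the linear fit and \textchi{}\textsuperscript{2}-based splitting are omitted. | ||
+ | \begin{lstlisting} | ||
+ | import java.util.ArrayList; | ||
+ | import java.util.List; | ||
+ |||
+ | //Minimal sketch of the clump-to-cluster selection described above | ||
+ | public class ClumpFilterSketch{ | ||
+ | //Hypothetical stand-in for a drift chamber hit | ||
+ | static class Hit{ | ||
+ | double doca;//distance of closest approach for this hit | ||
+ | double cellSize;//size of the drift cell the hit belongs to | ||
+ | Hit(double doca, double cellSize){ | ||
+ | this.doca = doca; | ||
+ | this.cellSize = cellSize; | ||
+ | } | ||
+ | } | ||
+ |||
+ | //Apply the noise cut and the minimum-hit-count cut to each clump | ||
+ | static List<List<Hit>> clumpsToClusters(List<List<Hit>> clumps){ | ||
+ | List<List<Hit>> clusters = new ArrayList<>(); | ||
+ | for(List<Hit> clump : clumps){ | ||
+ | //A hit is treated as noise if its DOCA exceeds the size of its cell | ||
+ | clump.removeIf(hit -> hit.doca > hit.cellSize); | ||
+ | //Clumps with fewer than four surviving hits are dropped | ||
+ | if(clump.size() >= 4){ | ||
+ | clusters.add(clump); | ||
+ | } | ||
+ | } | ||
+ | return clusters; | ||
+ | } | ||
+ | } | ||
+ | \end{lstlisting} | ||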
===Clusters to Segments=== | ===Clusters to Segments=== | ||
Line 173: | Line 764: | ||
====Linked Helices==== | ====Linked Helices==== | ||
− | First, track candidates that don't have at least five segments within their crosses are culled. Then four initial fits are performed, one for each combination of one of the region one segments being coupled with one of the region three segments. In these fits, the slopes of the segments are used to calculate the momentum and charge of the particle. Those values are then input into a swimmer which will swim from the region one cross to | + | First, track candidates that do not have at least five segments within their crosses are culled. Then four initial fits are performed, one for each combination of one of the region one segments being coupled with one of the region three segments. In these fits, the slopes of the segments are used to calculate the momentum and charge of the particle. Those values are then input into a swimmer which will swim from the region one cross to the x-y plane of the region two cross and then on to the x-y plane of the region three cross. If the \textchi{}\textsuperscript{2} of the fit is too large or the integrated magnetic field over the path is zero, the candidate is culled.
− | The momentum and charge are reconstructed, again, from the second segment of the region 1 and 3 crosses. If the momentum is greater than 11GeV, it is set to 11GeV. A state vector is formed from combining the region 1 cross information with the momentum. A Kalman fitter is created and attempts to fit the crosses. If this fails, the candidate is culled. The \textchi{}\textsuperscript{2} of the fit between the Kalman fit and the crosses is | + | The momentum and charge are reconstructed, again, from the second segment of the region 1 and 3 crosses. If the momentum is greater than 11GeV, it is set to 11GeV. A state vector is formed by combining the region 1 cross information with the momentum. A Kalman fitter is created and attempts to fit the crosses. If this fails, the candidate is culled. The \textchi{}\textsuperscript{2} of the fit between the Kalman fit and the crosses is calculated, and the candidate is culled if the value is too large. One final check is performed to see whether the Kalman fit failed or whether the final state vector it produced is invalid. If this check passes, the candidate is added to the list of track candidates.
===Crosses to Tracks Part II=== | ===Crosses to Tracks Part II=== | ||
Line 192: | Line 783: | ||
===Reconstructing Tracks=== | ===Reconstructing Tracks=== | ||
− | The algorithm now begins track reconstruction, failing if the Hit Based Tracks were not saved correctly. First, all of the Hit Based track information stored in the data banks are read into an array, and all of the segment-track associations are set. The algorithm now considers each track, failing if the track has less than four segments, and creating crosses in the same fashion as the Hit Based algorithm. These tracks are now run through a Kalman fitter. The track is culled | + | The algorithm now begins track reconstruction, failing if the Hit Based tracks were not saved correctly. First, all of the Hit Based track information stored in the data banks is read into an array, and all of the segment-track associations are set. The algorithm then considers each track, failing if the track has fewer than four segments, and creating crosses in the same fashion as the Hit Based algorithm. These tracks are then run through a Kalman fitter. The track is culled if the interaction vertex is too large. Finally, for each surviving track, the overall trajectory is calculated and the associated hits and segments are set accordingly.
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | == | + | ==Appendix C== |
− | + | This appendix contains the source code for the Raster Based reconstruction algorithm. There are some minor cosmetic changes to improve readability in this document. | |
− | = | + | \begin{lstlisting}[showspaces=false,showstringspaces=false] |
− | + | package org.jlab.service.dc; | |
− | |||
− | |||
− | |||
− | + | import java.io.File; | |
+ | import java.io.IOException; | ||
+ | import java.util.ArrayList; | ||
+ | import java.util.Arrays; | ||
+ | import java.util.concurrent.Callable; | ||
+ | import java.util.concurrent.ExecutionException; | ||
+ | import java.util.concurrent.ExecutorService; | ||
+ | import java.util.concurrent.Executors; | ||
+ | import java.util.concurrent.Future; | ||
+ | import java.util.concurrent.TimeUnit; | ||
+ | import java.util.logging.Level; | ||
+ | import java.util.logging.Logger; | ||
+ | import org.jlab.clas.swimtools.MagFieldsEngine; | ||
+ | import org.jlab.clas.swimtools.Swim; | ||
+ | import org.jlab.clas.swimtools.Swimmer; | ||
+ | import org.jlab.io.base.DataBank; | ||
+ | import org.jlab.io.base.DataEvent; | ||
+ | import org.jlab.io.hipo.HipoDataEvent; | ||
+ | import org.jlab.io.hipo.HipoDataSource; | ||
+ | import org.jlab.io.hipo.HipoDataSync; | ||
+ | import org.jlab.jnp.hipo.schema.SchemaFactory; | ||
+ | import org.jlab.rec.dc.Constants; | ||
− | ===Vertex Search=== | + | /** |
− | This | + | * This class is intended to be used to reconstruct the interaction vertex from |
+ | * previously calculated HB/TB and CVT data | ||
+ | * | ||
+ | * @author friant | ||
+ | */ | ||
+ | public class DCRBEngine extends DCEngine { | ||
+ | //Global Variables | ||
+ | static String engine = ""; | ||
+ | static float solVal = -1.0f * -1.0f;//-1.0 multiplier to correct for sign convention mismatch | ||
+ | static float torVal = -1.0f; | ||
+ | static float zMinGlobal = -10.0f; | ||
+ | static float zMaxGlobal = 10.0f; | ||
+ | static float percentGridSearch = 0.10f; | ||
+ | static int samplesGridSearch = 5; | ||
+ | static int iterationsVertex = 2; | ||
+ | static int samplesVertex = 5; | ||
+ | static int maxThreads = 1; | ||
+ | |||
+ | /** | ||
+ | * Constructor | ||
+ | */ | ||
+ | public DCRBEngine() { | ||
+ | super("DCRB"); | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Main method | ||
+ | * | ||
+ | * @param args | ||
+ | */ | ||
+ | public static void main(String[] args){ | ||
+ | //Ensure that RasterBased Data Banks are defined in the program | ||
+ | SchemaFactory fact = new SchemaFactory(); | ||
+ | fact.initFromDirectory("CLAS12DIR", "etc/bankdefs/hipo"); | ||
+ | |||
+ | //Hipo Reader and Writer | ||
+ | HipoDataSource reader = new HipoDataSource(); | ||
+ | HipoDataSync writer = new HipoDataSync(fact); | ||
+ | |||
+ | //Files | ||
+ | File inputFile; | ||
+ | File outputFile; | ||
+ | |||
+ | //Command Line Options | ||
+ | boolean help = false; | ||
+ | |||
+ | //Required | ||
+ | String input = ""; | ||
+ | String output = ""; | ||
+ | |||
+ | //Optional | ||
+ | String gridSearchSamples = ""; | ||
+ | String gridSearchPercent = ""; | ||
+ | String solenoid = ""; | ||
+ | String toroid = ""; | ||
+ | String vertexSamples = ""; | ||
+ | String vertexIterations = ""; | ||
+ | String threads = ""; | ||
+ | String lowerBound = ""; | ||
+ | String upperBound = ""; | ||
+ | |||
+ | //Parse | ||
+ | for(int i = 0; i < args.length; i++){ | ||
+ | switch(args[i]){ | ||
+ | case "-i": | ||
+ | if(i + 1 < args.length){ | ||
+ | input = args[i + 1]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -i"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-o": | ||
+ | if(i + 1 < args.length){ | ||
+ | output = args[i + 1]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -o"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-e": | ||
+ | if(i + 1 < args.length){ | ||
+ | engine = args[i + 1]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -e"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-g": | ||
+ | if(i + 2 < args.length){ | ||
+ | gridSearchSamples = args[i + 1]; | ||
+ | gridSearchPercent = args[i + 2]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -g"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-h": | ||
+ | help = true; | ||
+ | break; | ||
+ | case "-s": | ||
+ | if(i + 1 < args.length){ | ||
+ | solenoid = args[i + 1]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -s"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-t": | ||
+ | if(i + 1 < args.length){ | ||
+ | toroid = args[i + 1]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -t"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-v": | ||
+ | if(i + 2 < args.length){ | ||
+ | vertexSamples = args[i + 1]; | ||
+ | vertexIterations = args[i + 2]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -v"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-x": | ||
+ | if(i + 1 < args.length){ | ||
+ | threads = args[i + 1]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -x"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | case "-z": | ||
+ | if(i + 2 < args.length){ | ||
+ | lowerBound = args[i + 1]; | ||
+ | upperBound = args[i + 2]; | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Missing Parameter For Option -z"); | ||
+ | return; | ||
+ | } | ||
+ | break; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Attempt to use command line parameters to set values | ||
+ | ////Input File | ||
+ | if(input.isEmpty()){ | ||
+ | System.out.println("Input File Required"); | ||
+ | help = true; | ||
+ | } | ||
+ | else{ | ||
+ | inputFile = new File(input); | ||
+ | if(inputFile.exists() && !inputFile.isDirectory()){ | ||
+ | reader.open(inputFile); | ||
+ | } | ||
+ | else{ | ||
+ | System.out.println("Input File Not Found"); | ||
+ | return; | ||
+ | } | ||
+ | } | ||
+ | ////output File | ||
+ | if(output.isEmpty()){ | ||
+ | System.out.println("Output File Required"); | ||
+ | help = true; | ||
+ | } | ||
+ | else{ | ||
+ | outputFile = new File(output); | ||
+ | if(outputFile.exists() && !outputFile.isDirectory()){ | ||
+ | outputFile.delete(); | ||
+ | } | ||
+ | try{ | ||
+ | outputFile.createNewFile(); | ||
+ | writer.open(outputFile.getAbsolutePath()); | ||
+ | } | ||
+ | catch(IOException e){ | ||
+ | System.out.println("Could Not Create Output File"); | ||
+ | return; | ||
+ | } | ||
+ | } | ||
+ | ////Engine | ||
+ | if(!engine.isEmpty()){ | ||
+ | if(!(engine.equals("DCHB") || engine.equals("DCTB"))){ | ||
+ | System.out.println("Invalid Engine Specifified"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | ////Grid Search Parameters | ||
+ | if(!(gridSearchSamples.isEmpty() || gridSearchPercent.isEmpty())){ | ||
+ | try{ | ||
+ | samplesGridSearch = Integer.parseInt(gridSearchSamples); | ||
+ | percentGridSearch = Float.parseFloat(gridSearchPercent); | ||
+ | } | ||
+ | catch(NumberFormatException e){ | ||
+ | System.out.println("Invalid Number Format For Grid Search Parameters"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | ////Solenoid Scale | ||
+ | if(!solenoid.isEmpty()){ | ||
+ | try{ | ||
+ | solVal = Float.parseFloat(solenoid); | ||
+ | solVal = -1.0f * solVal;//-1.0 multiplier to correct for sign convention mismatch | ||
+ | } | ||
+ | catch(NumberFormatException e){ | ||
+ | System.out.println("Invalid Number Format For Solenoid"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | ////Toroid Scale | ||
+ | if(!toroid.isEmpty()){ | ||
+ | try{ | ||
+ | torVal = Float.parseFloat(toroid); | ||
+ | } | ||
+ | catch(NumberFormatException e){ | ||
+ | System.out.println("Invalid Number Format For Toroid"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | ////Vertex Search Parameters | ||
+ | if(!(vertexSamples.isEmpty() || vertexIterations.isEmpty())){ | ||
+ | try{ | ||
+ | samplesVertex = Integer.parseInt(vertexSamples); | ||
+ | iterationsVertex = Integer.parseInt(vertexIterations); | ||
+ | } | ||
+ | catch(NumberFormatException e){ | ||
+ | System.out.println("Invalid Number Format For Vertex Search Parameters"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | ////Threads | ||
+ | if(!threads.isEmpty()){ | ||
+ | try{ | ||
+ | maxThreads = Integer.parseInt(threads); | ||
+ | } | ||
+ | catch(NumberFormatException e){ | ||
+ | System.out.println("Invalid Number Format For Number Of Threads To Use"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | ////Target Bounds | ||
+ | if(!(lowerBound.isEmpty() || upperBound.isEmpty())){ | ||
+ | try{ | ||
+ | zMinGlobal = Float.parseFloat(lowerBound); | ||
+ | zMaxGlobal = Float.parseFloat(upperBound); | ||
+ | } | ||
+ | catch(NumberFormatException e){ | ||
+ | System.out.println("Invalid Number Format For Target Bounds"); | ||
+ | help = true; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Print help message and exit | ||
+ | if(help){ | ||
+ | printHelp(); | ||
+ | return; | ||
+ | } | ||
+ | |||
+ | //Init Engines | ||
+ | MagFieldsEngine magField = new MagFieldsEngine(); | ||
+ | magField.init(); | ||
+ | DCRBEngine thisEngine = new DCRBEngine(); | ||
+ | thisEngine.init(); | ||
+ | |||
+ | //Apply magnetic field scaling | ||
+ | Swimmer.setMagneticFieldsScales(solVal, torVal, 0.0); | ||
+ | |||
+ | //Process data events | ||
+ | int count = 0; | ||
+ | while(reader.hasEvent()){ | ||
+ | DataEvent event = reader.getNextEvent(); | ||
+ | if(!event.hasBank("RasterBasedTrkg::RBHits")){ | ||
+ | ((HipoDataEvent)event).initDictionary(fact); | ||
+ | } | ||
+ | thisEngine.processDataEvent(event); | ||
+ | writer.writeEvent(event); | ||
+ | System.out.println("EVENT " + count + " PROCESSED\r\n"); | ||
+ | count++; | ||
+ | } | ||
+ | |||
+ | //Tidy up before leaving | ||
+ | reader.close(); | ||
+ | writer.close(); | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Print help message to the terminal | ||
+ | */ | ||
+ | private static void printHelp(){ | ||
+ | System.out.println( | ||
+ | "FriTracking Command Line Options: \r\n" | ||
+ | + " -Required: \r\n" | ||
+ | + " -i Input Hipo File \r\n" | ||
+ | + " Requires 1 Parameter: \r\n" | ||
+ | + " 1 (String): Path to desired input file. \r\n" | ||
+ | + " Ex: -i /path/to/my/input/file.hipo \r\n" | ||
+ | + " \r\n" | ||
+ | + " -o Output Hipo File \r\n" | ||
+ | + " Requires 1 Parameter: \r\n" | ||
+ | + " 1 (String): Path to the desried ouput file. \r\n" | ||
+ | + " Will overwrite file if exists. \r\n" | ||
+ | + " Ex: -o /path/to/my/output/file.hipo \r\n" | ||
+ | + " \r\n" | ||
+ | + " -Optional: \r\n" | ||
+ | + " -e Engine To Source Data From \r\n" | ||
+ | + " Requires 1 Parameter: \r\n" | ||
+ | + " 1 (String): Either \"DCHB\" or \"DCTB\" \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " Use DCTB if available and DCHB if not. \r\n" | ||
+ | + " Ex: -e DCHB \r\n" | ||
+ | + " \r\n" | ||
+ | + " -g Grid Search Parameters \r\n" | ||
+ | + " Requires 2 Parameters: \r\n" | ||
+ | + " 1 (Int ): The number of samples to take per\r\n" | ||
+ | + " momentum dimension around the \r\n" | ||
+ | + " region-1 cross. \r\n" | ||
+ | + " 2 (Float ): The percent of the nominal value \r\n" | ||
+ | + " to search within. \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " 5 Samples within 10% of the nominal value \r\n" | ||
+ | + " Ex: -g 7 0.02 \r\n" | ||
+ | + " \r\n" | ||
+ | + " -h Print This Message \r\n" | ||
+ | + " \r\n" | ||
+ | + " -s Solenoid Field Scale Factor \r\n" | ||
+ | + " Requires 1 Parameter \r\n" | ||
+ | + " 1 (Float ): The number by which to scale the \r\n" | ||
+ | + " solenoid's magnetic field. Note: \r\n" | ||
+ | + " a value of 0.0 will likely cause \r\n" | ||
+ | + " failure. \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " Scale by -1.0. \r\n" | ||
+ | + " Ex: -s 0.05 \r\n" | ||
+ | + " \r\n" | ||
+ | + " -t Toroid Field Scale Factor \r\n" | ||
+ | + " Requires 1 Parameter \r\n" | ||
+ | + " 1 (Float ): The number by which to scale the \r\n" | ||
+ | + " toroid's magnetic field. Note: a \r\n" | ||
+ | + " value of 0.0 will likely cause \r\n" | ||
+ | + " failure. \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " Scale by -1.0. \r\n" | ||
+ | + " Ex: -s 0.1 \r\n" | ||
+ | + " \r\n" | ||
+ | + " -v Vertex Search Parameters \r\n" | ||
+ | + " Requires 2 Parameters: \r\n" | ||
+ | + " 1 (Int ): The number of samples to take \r\n" | ||
+ | + " within the target range to find \r\n" | ||
+ | + " the best z value. \r\n" | ||
+ | + " 2 (Int ): The number of recursive \r\n" | ||
+ | + " iterations to go through about \r\n" | ||
+ | + " each local minima. \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " 5 samples with 2 iterations. \r\n" | ||
+ | + " Ex: -v 10 2 \r\n" | ||
+ | + " \r\n" | ||
+ | + " -x Threads To Use \r\n" | ||
+ | + " Requires 1 Parameter: \r\n" | ||
+ | + " 1 (Int ): The maximum number of threads to \r\n" | ||
+ | + " use while searching for minima. \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " Only use 1 thread. \r\n" | ||
+ | + " Ex: -x 4 \r\n" | ||
+ | + " \r\n" | ||
+ | + " -z Target Bounds \r\n" | ||
+ | + " Requires 2 Parameters: \r\n" | ||
+ | + " 1 (Float ): The lower bound in the \r\n" | ||
+ | + " z-direction in which to search \r\n" | ||
+ | + " for the interaction vertex. \r\n" | ||
+ | + " [Unit = cm] \r\n" | ||
+ | + " 2 (Float ): The upper bound in the \r\n" | ||
+ | + " z-direction in which to search \r\n" | ||
+ | + " for the interaction vertex. \r\n" | ||
+ | + " [Unit = cm] \r\n" | ||
+ | + " Default Behavior: \r\n" | ||
+ | + " Lower Bound = -10.0, Upper Bound = 10.0. \r\n" | ||
+ | + " Ex: -z -2.5 5.0 \r\n" | ||
+ | + "\r\n"); | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Initialize the engine | ||
+ | * | ||
+ | * @return | ||
+ | */ | ||
+ | @Override | ||
+ | public boolean init() { | ||
+ | // Load cuts | ||
+ | Constants.Load(); | ||
+ | super.setStartTimeOption(); | ||
+ | super.LoadTables(); | ||
+ | return true; | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Generic process data event function | ||
+ | * | ||
+ | * @param event | ||
+ | * @return | ||
+ | */ | ||
+ | @Override | ||
+ | public boolean processDataEvent(DataEvent event){ | ||
+ | boolean hasCVTData = event.hasBank("CVTRec::Tracks"); | ||
+ | boolean hasHBData = event.hasBank("HitBasedTrkg::HBTracks"); | ||
+ | |||
+ | if(hasCVTData && hasHBData){ | ||
+ | //processDataEventBoth(event);//Just use HB for now | ||
+ | processDataEventHB(event); | ||
+ | } | ||
+ | else if(hasCVTData){ | ||
+ | processDataEventCVT(event); | ||
+ | } | ||
+ | else if(hasHBData){ | ||
+ | processDataEventHB(event); | ||
+ | } | ||
+ | |||
+ | return true; | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Process data event for when HB and CVT data is available | ||
+ | * | ||
+ | * @param event | ||
+ | * @return | ||
+ | */ | ||
+ | public boolean processDataEventBoth(DataEvent event){ | ||
+ | System.out.println("Process for Both CVT and HB Not Yet Implemented"); | ||
+ | return true; | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Process data event for when only CVT data is available | ||
+ | * | ||
+ | * @param event | ||
+ | * @return | ||
+ | */ | ||
+ | public boolean processDataEventCVT(DataEvent event){ | ||
+ | System.out.println("Process for CVT Not Yet Implemented"); | ||
+ | return true; | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Process data event for when only HB/TB data is available | ||
+ | * | ||
+ | * @param event | ||
+ | * @return | ||
+ | */ | ||
+ | public boolean processDataEventHB(DataEvent event) { | ||
+ | //Pull info out of TB/HB Banks | ||
+ | String sourceTracks; | ||
+ | String sourceCrosses; | ||
+ | if(engine.isEmpty()){ | ||
+ | if(event.hasBank("TimeBasedTrkg::TBTracks")){ | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBHits", "RasterBasedTrkg::RBHits")); | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBClusters", "RasterBasedTrkg::RBClusters")); | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBSegments", "RasterBasedTrkg::RBSegments")); | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBCrosses", "RasterBasedTrkg::RBCrosses")); | ||
+ | sourceTracks = "TimeBasedTrkg::TBTracks"; | ||
+ | sourceCrosses = "TimeBasedTrkg::TBCrosses"; | ||
+ | } | ||
+ | else{ | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBHits", "RasterBasedTrkg::RBHits")); | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBClusters", "RasterBasedTrkg::RBClusters")); | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBSegments", "RasterBasedTrkg::RBSegments")); | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBCrosses", "RasterBasedTrkg::RBCrosses")); | ||
+ | sourceTracks = "HitBasedTrkg::HBTracks"; | ||
+ | sourceCrosses = "HitBasedTrkg::HBCrosses"; | ||
+ | } | ||
+ | } | ||
+ | else if(engine.equals("DCHB")){ | ||
+ | if(event.hasBank("TimeBasedTrkg::TBTracks")){ | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBHits", "RasterBasedTrkg::RBHits")); | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBClusters", "RasterBasedTrkg::RBClusters")); | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBSegments", "RasterBasedTrkg::RBSegments")); | ||
+ | event.appendBank(copyBank(event, "HitBasedTrkg::HBCrosses", "RasterBasedTrkg::RBCrosses")); | ||
+ | sourceTracks = "HitBasedTrkg::HBTracks"; | ||
+ | sourceCrosses = "HitBasedTrkg::HBCrosses"; | ||
+ | } | ||
+ | else{ | ||
+ | return false; | ||
+ | } | ||
+ | } | ||
+ | else if(engine.equals("DCTB")){ | ||
+ | if(event.hasBank("TimeBasedTrkg::TBTracks")){ | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBHits", "RasterBasedTrkg::RBHits")); | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBClusters", "RasterBasedTrkg::RBClusters")); | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBSegments", "RasterBasedTrkg::RBSegments")); | ||
+ | event.appendBank(copyBank(event, "TimeBasedTrkg::TBCrosses", "RasterBasedTrkg::RBCrosses")); | ||
+ | sourceTracks = "TimeBasedTrkg::TBTracks"; | ||
+ | sourceCrosses = "TimeBasedTrkg::TBCrosses"; | ||
+ | } | ||
+ | else{ | ||
+ | return false; | ||
+ | } | ||
+ | } | ||
+ | else{ | ||
+ | return false; | ||
+ | } | ||
+ | |||
+ | //Create the RBTracks Bank | ||
+ | DataBank rbBank = copyBank(event, sourceTracks, | ||
+ | "RasterBasedTrkg::RBTracks"); | ||
+ | |||
+ | //Raster variables | ||
+ | float rasterX = 0.0f; | ||
+ | float rasterY = 0.0f; | ||
+ | float rasterUX = 0.05f;//0.5mm uncertainty in either direction | ||
+ | float rasterUY = 0.05f;//0.5mm uncertainty in either direction | ||
+ | if(event.hasBank("MC::Particle")){ | ||
+ | rasterX = event.getBank("MC::Particle").getFloat("vx", 0); | ||
+ | rasterY = event.getBank("MC::Particle").getFloat("vy", 0); | ||
+ | } | ||
+ | float[] rasterInfo = new float[]{rasterX, rasterUX, | ||
+ | rasterY, rasterUY}; | ||
+ | |||
+ | //Calculate the interaction vertex for each track in the event | ||
+ | for(int i = 0; i < rbBank.rows(); i++){ | ||
+ | float[] trackInfo = new float[7]; | ||
+ | trackInfo[0] = event.getBank(sourceTracks).getFloat("t1_x" , i); | ||
+ | trackInfo[1] = event.getBank(sourceTracks).getFloat("t1_y" , i); | ||
+ | trackInfo[2] = event.getBank(sourceTracks).getFloat("t1_z" , i); | ||
+ | trackInfo[3] = event.getBank(sourceTracks).getFloat("t1_px", i); | ||
+ | trackInfo[4] = event.getBank(sourceTracks).getFloat("t1_py", i); | ||
+ | trackInfo[5] = event.getBank(sourceTracks).getFloat("t1_pz", i); | ||
+ | trackInfo[6] = (float)(event.getBank(sourceTracks).getByte("q", i)); | ||
+ | |||
+ | //Calculate | ||
+ | float[] output = new float[8]; | ||
+ | Arrays.fill(output, Float.NaN); | ||
+ | float doca = -1.0f; | ||
+ | try { | ||
+ | doca = interactionVertexGridSearch(samplesGridSearch, rasterInfo, trackInfo, percentGridSearch, output); | ||
+ | } catch (InterruptedException | ExecutionException exception) { | ||
+ | Logger.getLogger(DCRBEngine.class.getName()).log(Level.SEVERE, null, exception); | ||
+ | System.out.println(exception); | ||
+ | } | ||
+ | |||
+ | //Make sure that the momentum is pointing in the right direction | ||
+ | if(output[5] < 0.0){ | ||
+ | output[3] = output[3] * -1.0f; | ||
+ | output[4] = output[4] * -1.0f; | ||
+ | output[5] = output[5] * -1.0f; | ||
+ | } | ||
+ | |||
+ | //System.out.println("Doca: " + doca); | ||
+ | //System.out.println(Arrays.toString(output)); | ||
+ | |||
+ | //Set Values | ||
+ | rbBank.setFloat("Vtx0_x", i, output[0]); | ||
+ | rbBank.setFloat("Vtx0_y", i, output[1]); | ||
+ | rbBank.setFloat("Vtx0_z", i, output[2]); | ||
+ | rbBank.setFloat( "p0_x", i, output[3]); | ||
+ | rbBank.setFloat( "p0_y", i, output[4]); | ||
+ | rbBank.setFloat( "p0_z", i, output[5]); | ||
+ | rbBank.setFloat( "doca", i, doca); | ||
+ | } | ||
+ | |||
+ | //Put Tracks in Event | ||
+ | event.appendBank(rbBank); | ||
+ | |||
+ | return true; | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * This method performs a three dimensional grid search within the | ||
+ | * uncertainty in the track's momentum. | ||
+ | * | ||
+ | * @param samples the number of samples to take in each dimension | ||
+ | * @param rasterInfo the array {x, uncert x, y, uncert y} | ||
+ | * @param trackInfo the array {vx, vy, vz, px, py, pz, q} | ||
+ | * @param uncertainty + or - percent uncertainty in track momentum | ||
+ | * @param out the array to fill with the ultimate swim output | ||
+ | * result | ||
+ | * @return the doca of the ultimate swim output result | ||
+ | */ | ||
+ | private float interactionVertexGridSearch(int samples, float[] rasterInfo, float[] trackInfo, float uncertainty, float[] out) throws InterruptedException, ExecutionException{ | ||
+ | //define bounds | ||
+ | float upperBoundPX = trackInfo[3] * (1.0f + uncertainty); | ||
+ | float lowerBoundPX = trackInfo[3] * (1.0f - uncertainty); | ||
+ | float upperBoundPY = trackInfo[4] * (1.0f + uncertainty); | ||
+ | float lowerBoundPY = trackInfo[4] * (1.0f - uncertainty); | ||
+ | float upperBoundPZ = trackInfo[5] * (1.0f + uncertainty); | ||
+ | float lowerBoundPZ = trackInfo[5] * (1.0f - uncertainty); | ||
+ | |||
+ | //choose the max radius of uncertainty to check doca against | ||
+ | float rasterErr = Math.max(rasterInfo[1], rasterInfo[3]); | ||
+ | |||
+ | //define useful arrays | ||
+ | float[][][][] output = new float[samples][samples][samples][8]; | ||
+ | ArrayList<ArrayList<ArrayList<Future<Float>>>> doca = new ArrayList<>(); | ||
+ | ArrayList<int[]> localMinIndices = new ArrayList<>(); | ||
+ | |||
+ | //Define thread service | ||
+ | ExecutorService ex = Executors.newFixedThreadPool(maxThreads); | ||
+ | |||
+ | //Find the interaction vertices and docas to sift through | ||
+ | for(int i = 0; i < samples; i++){ | ||
+ | doca.add(new ArrayList<>()); | ||
+ | for(int j = 0; j < samples; j++){ | ||
+ | doca.get(i).add(new ArrayList<>()); | ||
+ | for(int k = 0; k < samples; k++){ | ||
+ | //Set Swimmer params | ||
+ | float[] swimParams = new float[]{ | ||
+ | trackInfo[0], | ||
+ | trackInfo[1], | ||
+ | trackInfo[2], | ||
+ | lowerBoundPX + i * (upperBoundPX - lowerBoundPX) / (samples - 1), | ||
+ | lowerBoundPY + j * (upperBoundPY - lowerBoundPY) / (samples - 1), | ||
+ | lowerBoundPZ + k * (upperBoundPZ - lowerBoundPZ) / (samples - 1), | ||
+ | trackInfo[6]}; | ||
− | + | //Get interaction vertices | |
+ | doca.get(i).get(j).add(ex.submit(new ThreadedVertexFinder( | ||
+ | iterationsVertex, | ||
+ | samplesVertex, | ||
+ | zMinGlobal, | ||
+ | zMaxGlobal, | ||
+ | rasterInfo[0], | ||
+ | rasterInfo[2], | ||
+ | swimParams, | ||
+ | output[i][j][k]))); | ||
+ | } | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Wait for all tasks to finish | ||
+ | ex.shutdown(); | ||
+ | ex.awaitTermination(10, TimeUnit.DAYS);//Arbitrarily large | ||
+ | |||
+ | //Find local mins | ||
+ | for(int i = 0; i < samples; i++){ | ||
+ | for(int j = 0; j < samples; j++){ | ||
+ | for(int k = 0; k < samples; k++){ | ||
+ | //check if within rasterErr | ||
+ | if(doca.get(i).get(j).get(k).get() < rasterErr){ | ||
+ | localMinIndices.add(new int[]{i, j, k}); | ||
+ | continue; | ||
+ | } | ||
+ | //check if unrealistically massive | ||
+ | if(doca.get(i).get(j).get(k).get() > Float.MAX_VALUE / 2.0){ | ||
+ | continue; | ||
+ | } | ||
+ | //check px component | ||
+ | if(i == 0 && doca.get(i).get(j).get(k).get() > doca.get(i + 1).get(j).get(k).get()){ | ||
+ | continue; | ||
+ | } | ||
+ | else if (i > 0 && i < samples - 1 && (doca.get(i).get(j).get(k).get() > doca.get(i + 1).get(j).get(k).get() || doca.get(i).get(j).get(k).get() > doca.get(i - 1).get(j).get(k).get())){ | ||
+ | continue; | ||
+ | } | ||
+ | else if(i == samples - 1 && doca.get(i).get(j).get(k).get() > doca.get(i - 1).get(j).get(k).get()){ | ||
+ | continue; | ||
+ | } | ||
+ | //check py component | ||
+ | if(j == 0 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j + 1).get(k).get()){ | ||
+ | continue; | ||
+ | } | ||
+ | else if (j > 0 && j < samples - 1 && (doca.get(i).get(j).get(k).get() > doca.get(i).get(j + 1).get(k).get() || doca.get(i).get(j).get(k).get() > doca.get(i).get(j - 1).get(k).get())){ | ||
+ | continue; | ||
+ | } | ||
+ | else if(j == samples - 1 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j - 1).get(k).get()){ | ||
+ | continue; | ||
+ | } | ||
+ | //check pz component | ||
+ | if(k == 0 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k + 1).get()){ | ||
+ | continue; | ||
+ | } | ||
+ | else if (k > 0 && k < samples - 1 && (doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k + 1).get() || doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k - 1).get())){ | ||
+ | continue; | ||
+ | } | ||
+ | else if(k == samples - 1 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k - 1).get()){ | ||
+ | continue; | ||
+ | } | ||
+ | //add to local min list | ||
+ | localMinIndices.add(new int[]{i, j, k}); | ||
+ | } | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Cull values that fail zMinGlobal < z < zMaxGlobal | ||
+ | for(int i = 0; i < localMinIndices.size(); i++){ | ||
+ | int[] idx = localMinIndices.get(i); | ||
+ | if(output[idx[0]][idx[1]][idx[2]][2] < zMinGlobal || output[idx[0]][idx[1]][idx[2]][2] > zMaxGlobal){ | ||
+ | localMinIndices.remove(i); | ||
+ | i--; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Exit | ||
+ | int[] i; | ||
+ | switch(localMinIndices.size()){ | ||
+ | case 0: | ||
+ | //Fail gracefully | ||
+ | Arrays.fill(out, Float.NaN); | ||
+ | return Float.NaN; | ||
+ | case 1: | ||
+ | //Only one value | ||
+ | i = localMinIndices.get(0); | ||
+ | System.arraycopy(output[i[0]][i[1]][i[2]], 0, out, 0, 8); | ||
+ | return doca.get(i[0]).get(i[1]).get(i[2]).get(); | ||
+ | default: | ||
+ | //Find value which minimizes change from nominal track | ||
+ | float minIdxDiff = Float.MAX_VALUE; | ||
+ | float minDoca = Float.MAX_VALUE; | ||
+ | boolean inRasterErr = false; | ||
+ | int minIndex = 0; | ||
+ | for(int[] idx : localMinIndices){ | ||
+ | float swimDoca = doca.get(idx[0]).get(idx[1]).get(idx[2]).get(); | ||
+ | //System.out.println("idx: " + Arrays.toString(idx) + ", doca: " + swimDoca + ", z: " + output[idx[0]][idx[1]][idx[2]][2]); | ||
+ | if(swimDoca < rasterErr){ | ||
+ | inRasterErr = true; | ||
+ | float idxDiff = (float)Math.sqrt(Math.pow(idx[0] - (float)samples / 2, 2.0) + | ||
+ | Math.pow(idx[1] - (float)samples / 2, 2.0) + | ||
+ | Math.pow(idx[2] - (float)samples / 2, 2.0)); | ||
+ | if(idxDiff < minIdxDiff){ | ||
+ | minIdxDiff = idxDiff; | ||
+ | minIndex = localMinIndices.indexOf(idx); | ||
+ | } | ||
+ | } | ||
+ | else if(inRasterErr == false){ | ||
+ | if(swimDoca < minDoca){ | ||
+ | minDoca = swimDoca; | ||
+ | minIndex = localMinIndices.indexOf(idx); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | } | ||
+ | i = localMinIndices.get(minIndex); | ||
+ | //System.out.println("Winner: " + Arrays.toString(i)); | ||
+ | System.arraycopy(output[i[0]][i[1]][i[2]], 0, out, 0, 8); | ||
+ | return doca.get(i[0]).get(i[1]).get(i[2]).get(); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * This method uses a swimmer to calculate the track's distance of closest approach to the rasterized beam axis. | ||
+ | * | ||
+ | * @param iterations The maximum number of recursive steps | ||
+ | * @param samples The number of sample points to take between zMin and zMax for each step. | ||
+ | * @param zMin The lower bound of the area of interest. Should be below the lower bound of the target | ||
+ | * @param zMax The upper bound of the area of interest. Should be above the upper bound of the target | ||
+ | * @param rasterX The rasterized beam x position | ||
+ | * @param rasterY The rasterized beam y position | ||
+ | * @param swim A Swim class which has been set to the position and momentum of interest | ||
+ | * @param out A pointer to a float array which will be filled by this method. Will be set to the Swim output or NaN | ||
+ | * @return the doca to the rasterized beam coords; Float.MAX_VALUE if no minimum was found | ||
+ | */ | ||
+ | private float findInteractionVertex(int iterations, int samples, float zMin, float zMax, float rasterX, float rasterY, Swim swim, float[] out){ | ||
+ | //Define useful arrays | ||
+ | float[][] swimOutput = new float[samples][8]; | ||
+ | float[] doca = new float[samples]; | ||
+ | int[] docaIndex = new int[samples]; | ||
+ | int docaLength = 0; | ||
+ | int[] localMin = new int[samples]; | ||
+ | int localMinLength = 0; | ||
+ | |||
+ | //Get swim outputs | ||
+ | for(int i = 0; i < samples; i++){ | ||
+ | double[] temp = swim.SwimToPlaneLab(zMin + i * (zMax - zMin) / (samples - 1)); | ||
+ | if(temp == null){ | ||
+ | swimOutput[i] = null; | ||
+ | continue; | ||
+ | } | ||
+ | for(int j = 0; j < 8; j++){ | ||
+ | swimOutput[i][j] = (float)temp[j]; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Calculate the doca for each sample point | ||
+ | for(int i = 0; i < samples; i++){ | ||
+ | if(swimOutput[i] == null){ | ||
+ | continue; | ||
+ | } | ||
+ | doca[docaLength] = (float)Math.sqrt(Math.pow(rasterX - swimOutput[i][0], 2.0f) + Math.pow(rasterY - swimOutput[i][1], 2.0f)); | ||
+ | docaIndex[docaLength] = i; | ||
+ | docaLength++; | ||
+ | } | ||
+ | |||
+ | //Handle variable docaLengths | ||
+ | switch(docaLength){ | ||
+ | case 0: | ||
+ | //Fail gracefully | ||
+ | localMinLength = 0; | ||
+ | break; | ||
+ | case 1: | ||
+ | //Set only point as local minimum | ||
+ | localMinLength = 1; | ||
+ | localMin[0] = 0; | ||
+ | break; | ||
+ | default: | ||
+ | //Find local minima | ||
+ | if(doca[0] < doca[1]){ | ||
+ | localMin[localMinLength] = 0; | ||
+ | localMinLength++; | ||
+ | } | ||
+ | for(int i = 1; i < docaLength - 1; i++){ | ||
+ | if(doca[i] < doca[i - 1] && doca[i] < doca[i + 1]){ | ||
+ | localMin[localMinLength] = i; | ||
+ | localMinLength++; | ||
+ | } | ||
+ | } | ||
+ | if(doca[docaLength - 1] < doca[docaLength - 2]){ | ||
+ | localMin[localMinLength] = docaLength - 1; | ||
+ | localMinLength++; | ||
+ | } | ||
+ | break; | ||
+ | } | ||
+ | |||
+ | //Exit? | ||
+ | if(iterations == 1){ | ||
+ | int index = -1; | ||
+ | float smallest = Float.MAX_VALUE; | ||
+ | |||
+ | //Find a minimum? | ||
+ | if(localMinLength == 0){ | ||
+ | Arrays.fill(out, Float.NaN); | ||
+ | return Float.MAX_VALUE; | ||
+ | } | ||
+ | |||
+ | //Find the smallest doca | ||
+ | for(int i = 0; i < localMinLength; i++){ | ||
+ | if(doca[localMin[i]] < smallest){ | ||
+ | smallest = doca[localMin[i]];//track the running minimum | ||
+ | index = i; | ||
+ | } | ||
+ | } | ||
+ | System.arraycopy(swimOutput[localMin[index]], 0, out, 0, 8); | ||
+ | return doca[localMin[index]]; | ||
+ | } | ||
+ | |||
+ | //Recursively call this method on each of the local minima | ||
+ | float[][] minOut = new float[localMinLength][8]; | ||
+ | float[] minDoca = new float[localMinLength]; | ||
+ | for(int i = 0; i < localMinLength; i++){ | ||
+ | if(docaIndex[localMin[i]] == 0){ | ||
+ | float newZMin = zMin - (zMax - zMin) / 2; | ||
+ | float newZMax = zMax - (zMax - zMin) / 2; | ||
+ | if(newZMin < 2.0 * zMinGlobal - zMaxGlobal){ | ||
+ | Arrays.fill(minOut[i], Float.NaN); | ||
+ | minDoca[i] = Float.MAX_VALUE; | ||
+ | } | ||
+ | else{ | ||
+ | minDoca[i] = findInteractionVertex(iterations, samples, newZMin, newZMax, rasterX, rasterY, swim, minOut[i]); | ||
+ | } | ||
+ | } | ||
+ | else if(docaIndex[localMin[i]] == samples - 1){ | ||
+ | float newZMin = zMin + (zMax - zMin) / 2; | ||
+ | float newZMax = zMax + (zMax - zMin) / 2; | ||
+ | if(newZMin > 2.0 * zMaxGlobal - zMinGlobal){ | ||
+ | Arrays.fill(minOut[i], Float.NaN); | ||
+ | minDoca[i] = Float.MAX_VALUE; | ||
+ | } | ||
+ | else{ | ||
+ | minDoca[i] = findInteractionVertex(iterations, samples, newZMin, newZMax, rasterX, rasterY, swim, minOut[i]); | ||
+ | } | ||
+ | } | ||
+ | else{ | ||
+ | minDoca[i] = findInteractionVertex(iterations - 1, samples, swimOutput[localMin[i]][2] - (zMax - zMin) / (samples - 1), swimOutput[localMin[i]][2] + (zMax - zMin) / (samples - 1), rasterX, rasterY, swim, minOut[i]); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Find the smallest doca | ||
+ | int index = -1; | ||
+ | float smallest = Float.MAX_VALUE; | ||
+ | for(int i = 0; i < localMinLength; i++){ | ||
+ | if(minOut[i] != null && minDoca[i] < smallest){ | ||
+ | smallest = minDoca[i];//track the running minimum | ||
+ | index = i; | ||
+ | } | ||
+ | } | ||
+ | |||
+ | //Exit | ||
+ | if(index == -1){ | ||
+ | Arrays.fill(out, Float.NaN); | ||
+ | return Float.MAX_VALUE; | ||
+ | } | ||
+ | System.arraycopy(minOut[index], 0, out, 0, 8); | ||
+ | return minDoca[index]; | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * This class is a passthrough class to allow the findInteractionVertex | ||
+ | * method to be threaded | ||
+ | */ | ||
+ | private class ThreadedVertexFinder implements Callable<Float>{ | ||
+ | //Copies of method parameters | ||
+ | int iterations; | ||
+ | int samples; | ||
+ | float zMin; | ||
+ | float zMax; | ||
+ | float rasterX; | ||
+ | float rasterY; | ||
+ | Swim swim; | ||
+ | float[] out; | ||
+ | |||
+ | //Constructor | ||
+ | public ThreadedVertexFinder(int _iterations, int _samples, float _zMin, float _zMax, float _rasterX, float _rasterY, float[] _swimParams, float[] _out){ | ||
+ | iterations = _iterations; | ||
+ | samples = _samples; | ||
+ | zMin = _zMin; | ||
+ | zMax = _zMax; | ||
+ | rasterX = _rasterX; | ||
+ | rasterY = _rasterY; | ||
+ | swim = new Swim(); | ||
+ | swim.SetSwimParameters(_swimParams[0], _swimParams[1], _swimParams[2], _swimParams[3], _swimParams[4], _swimParams[5], (int)_swimParams[6]); | ||
+ | out = _out; | ||
+ | } | ||
+ | |||
+ | //Passthrough | ||
+ | @Override | ||
+ | public Float call(){ | ||
+ | return findInteractionVertex(iterations, samples, zMin, zMax, rasterX, rasterY, swim, out); | ||
+ | } | ||
+ | } | ||
+ | |||
+ | /** | ||
+ | * Provides a copy mechanism from a pre-existing dataBank to a new one. | ||
+ | * Note that this does not natively append the new bank to the old event. | ||
+ | * | ||
+ | * @param event | ||
+ | * @param oldBankName | ||
+ | * @param newBankName | ||
+ | */ | ||
private DataBank copyBank(DataEvent event, String oldBankName, String newBankName){
    DataBank oldBank = event.getBank(oldBankName);
    if(oldBank == null){
        return null;
    }

    DataBank newBank = event.createBank(newBankName, oldBank.rows());
    for(int i = 0; i < oldBank.rows(); i++){
        for(int j = 0; j < oldBank.columns(); j++){
            String column = oldBank.getColumnList()[j];
            switch(oldBank.getDescriptor().getProperty("type", column)){
                case 1://Byte
                    newBank.setByte(column, i, oldBank.getByte(column, i));
                    break;
                case 2://Short
                    newBank.setShort(column, i, oldBank.getShort(column, i));
                    break;
                case 3://Int
                    newBank.setInt(column, i, oldBank.getInt(column, i));
                    break;
                case 4://Unused
                    break;
                case 5://Float
                    newBank.setFloat(column, i, oldBank.getFloat(column, i));
                    break;
                case 6://Double
                    newBank.setDouble(column, i, oldBank.getDouble(column, i));
                    break;
            }
        }
    }
    return newBank;
}
}
\end{lstlisting}
This thesis is two-fold in purpose. First, it is intended to provide an understanding of the current methods used to reconstruct particle trajectories from the drift chamber sub-detectors of CLAS12. This includes both a theoretical understanding of the underlying algorithms and physics as well as the actual implementation of those ideas. The second purpose is to present an additional method of reconstruction which takes the already existing drift chamber reconstruction output and further refines it so that interactions which arise from a rastered beam may be reconstructed more accurately.
It should be noted that, in the context of this thesis, the primary motivation behind the development of this new reconstruction algorithm is to allow for the reconstruction of events from a physics target that contains two polarized target regions separated by some distance. More details about the target are given in the Simulation and Reconstruction chapter. Semi-Inclusive Deep Inelastic Scattering (SIDIS) events are of particular interest. SIDIS events only consider the scattered incident particle, an electron in this case, and the highest energy hadron produced by the interaction, if any. The core idea is that when the scattering particle interacts with a parton in the struck particle, there will be a primary product hadron accompanied by a hadron shower. The kinematics of this primary product and the scattering particle are believed to encode most of the information about the struck parton.
Theory
This chapter is meant to provide sufficient knowledge and background to understand the general workings of the CLAS12 drift chambers, the most critical elements of the reconstruction process, and the rastering of the incident electron beam. With regards to the drift chambers, particular attention is paid to the geometry and physical construction, as both directly affect the acceptance of the detector. The elements of most importance discussed here are the concepts of Runge-Kutta approximations and Kalman Fitters; both are used heavily in the reconstruction process. Finally, the discussion of rastered beams gives background as to why this project exists in the first place.
Drift Chambers
The CLAS12 drift chambers are designed to determine the momentum of charged particles moving through them. The following sections provide a brief overview of drift chambers in general.
Typical Construction and Principle of Operation
Drift chambers are charged particle detectors that are designed to measure the trajectory of a particle's path through the detector. This is the primary measurement used to calculate quantities of scientific interest by means of inverse kinematics.
Typically, drift chambers are constructed out of a chamber filled with an ionizable gas and planes of conducting wire. The wire planes are arranged to be ideally perpendicular to the particle's direction of motion and come in two alternating types: cathode planes and anode (sense) planes. The planes are designed to create a nearly-uniform static electric field which will drive electrons towards the sense planes. As described in the next paragraph, the sense wires collect ionized electrons, producing an electronic signal. That electronic signal propagates in both directions along the sense wire and away from the location of the ionization. The signal's propagation is measured by a pair of coupled timers at either end of the wire. It is common to have multiple sets of layers which are oriented differently, though still perpendicularly to the direction of motion, to increase the accuracy of the reconstructed trajectories.\autocite{sauli77}
The general principle of producing an electronic signal in a drift chamber is as follows. A charged particle moves through the drift chamber and ionizes atoms along its path. The electrons which have been separated from these atoms are then accelerated towards the anode (sense) wires. If the anode wire is very thin, the electric field near it becomes very strong, causing the electrons to pick up a great deal of speed and initiate a Townsend avalanche, a cascading series of ionizations which amplifies the signal.\autocite{leo87} This signal is a voltage pulse traveling towards either end of the wire from the point where the electrons arrive. Using the coupled timers at the ends, it is then possible to use the difference in signal propagation times to calculate the position of the hit along the wire.
This position along the axis of the wire provides only one-dimensional information about a particle traveling through 3D space. It is, however, possible to combine the timing information from multiple nearby sense wires with a measurement of the time the liberated electrons take to reach each sense wire to calculate the distance of closest approach (DOCA) to each of them. This gives a position along the axis of each wire as well as a radius perpendicular to that axis at that point. If all of this information is known perfectly, then the vector component of the path which lies in the plane of a circle defined by the aforementioned radius will be tangent to that circle. Combining that information with the change in hit position along the axis of each wire allows for the ultimate measurement of the particle's path through the detector.
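As a concrete illustration of the timing measurement, consider the minimal sketch below. It is not code from clas12-offline-software; the method name, the convention that positions are measured from the wire's midpoint, and the signalSpeed parameter are all assumptions made for this example.
\begin{lstlisting}
//A minimal sketch (not from clas12-offline-software): recover the hit
//position along a sense wire from the pulse arrival times at the two
//coupled timers. Positions are measured from the wire's midpoint and
//signalSpeed is the pulse propagation speed along the wire; both
//conventions are assumptions made for this illustration.
public static double hitPositionAlongWire(double tLeft, double tRight, double signalSpeed){
    //The pulse reaches the nearer end first, so half the arrival time
    //difference, times the propagation speed, gives the offset from the midpoint
    return 0.5 * signalSpeed * (tRight - tLeft);
}
\end{lstlisting}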
Addition of a Magnetic Field
The inclusion of a magnetic field into a drift chamber allows for the reconstruction of not just the path of the particle, but also the magnitude of its momentum. A uniform magnetic field perpendicular to the particle's direction of motion, for example, would cause the path to bend into some section of a circle, thus changing the expected hit positions along the wires of the sense planes. Using these hits, it is then possible to reconstruct the radius of curvature of the path. With the assumption that the particle carries the usual elementary charge, it is then possible to calculate the particle's momentum by equating the central force and magnetic force as shown in equation \ref{eq:momentumFromForces}.
\begin{equation} \frac{mv^2}{r}=qvB\implies{}mv=qBr \label{eq:momentumFromForces} \end{equation}
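For a sense of how equation \ref{eq:momentumFromForces} is used in practice, the sketch below converts a fitted radius of curvature into a momentum magnitude using the common shorthand that, with $p$ in GeV/c, $B$ in Tesla, and $r$ in meters, $p\approx{}0.3Br$ for a singly charged particle. This is an illustrative snippet, not part of the CLAS12 reconstruction code.
\begin{lstlisting}
//Illustrative only: momentum magnitude from curvature via mv = qBr.
//With p in GeV/c, B in Tesla, and r in meters, the constant works out
//to 0.299792458 for a particle carrying one elementary charge.
public static double momentumFromCurvature(double rMeters, double bTesla, int chargeUnits){
    final double K = 0.299792458;
    return K * bTesla * rMeters * Math.abs(chargeUnits);
}
\end{lstlisting}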
CLAS12
CLAS12's drift chambers are the primary source of path information for interaction reconstruction within the entire detector suite and are the focus of this document. As such, it is important to make as clear as possible their construction.
CLAS12's Drift Chamber Construction
The CLAS12 Detector's drift chamber is broken up into 18 separate sub-chambers as shown in figure \ref{fig:regionsAndSectors}. There are six chambers in each of three regions which are approximately 2.1, 3.3, and 4.5 meters away from the physics target position. Within each region, the six triangular chambers are arranged to be in symmetric sectors around the beam axis with angular coverage between 5\textdegree{} and 40\textdegree{} as measured at the target location with respect to the beam axis. The design of the detector allows for a momentum resolution of less than one percent.\autocite{dcSpecSheet}
\begin{figure} \includegraphics[width=\textwidth]{./Img/DAF-19-03-24-DC-RegionsAndSectors.png} \caption{The three regions and six sectors per region in the drift chambers.\autocite{clas12DriftChambers}} \label{fig:regionsAndSectors} \end{figure}
Within each region and sector are two superlayers of wire planes which are arranged such that the axes of the wires lie in parallel planes that are rotated by 12\textdegree{} with respect to each other. Within each superlayer are 6 layers of sense wires. Each layer has 112 sense wires, each with a hexagonal arrangement of cathode wires around it. These hexagonal arrangements are referred to as cells. They grow in size from region 1 to region 3 and provide a spatial resolution of approximately 300\textmu{}m.\autocite{dcSpecSheet}
Magnetic Fields
The CLAS12 detector has two magnetic fields, one solenoidal field at the target and one toroidal field centered around the second drift chamber region. The solenoidal field points primarily in the direction of the beamline with a central value of 5 Tesla, and is designed to prevent Moller electrons, a primary background, from entering the detector.\autocite{solenoidSpecSheet} The toroidal field is actually the result of six smaller coils, one in each sector, which all contribute to create a field that is primarily in the phi (azimuthal about the beam line) direction.\autocite{toroidSpecSheet} This is designed to bend particles either into or away from the beam line through the drift chambers which allows for the reconstruction of the particle's momentum. See figure \ref{fig:magneticFields}.
\begin{figure} \includegraphics[width=\textwidth]{./Img/DAF-19-03-24-DC-MagneticFieldMagnitude.png} \caption{The strength of both the solenoidal (left) and toroidal (center) magnetic fields.} \label{fig:magneticFields} \end{figure}
Track Reconstruction Elements
This section describes the theory behind and implementation of several of the more opaque components of the reconstruction algorithms.
Coordinate Systems
The collaboration has defined two coordinate systems in addition to the lab system to reduce the complexity of the reconstruction process within the drift chambers. The lab coordinate system is defined by a right-handed system such that the positive y-direction is upwards, against the pull of gravity, and the positive z-direction is down the beam line. This results in the positive x-direction bisecting what is known as sector one as shown in figure \ref{fig:labCoordinateSystem}.
\begin{figure} \includegraphics[width=\textwidth]{./Img/DAF-19-03-24-DC-LabCoordinateSystem.png} \caption{The lab coordinate system and sectors as seen when looking down the beam line.} \label{fig:labCoordinateSystem} \end{figure}
The sector coordinate system is defined by rotating the sector of interest into the position of sector one. This rotation will naturally be some multiple of 60\textdegree{} and will only take into account $\pm{}30$\textdegree about the 0\textdegree angle. The tilted sector coordinate system takes a sector coordinate system and rotates it about the y-axis by 25\textdegree{} such that the z-direction is now perpendicular to the plane of the drift chambers. Transformations between these different coordinate systems are typically handled by rotation matrices. The matrices \ref{mat:rotx}, \ref{mat:roty}, and \ref{mat:rotz} operate on vectors to rotate about the x, y, and z axes respectively. \autocite{arfken13}
\begin{equation} \begin{bmatrix} 1 & 0 & 0\\ 0 & \cos(\theta) & -\sin(\theta)\\ 0 & \sin(\theta) & \cos(\theta) \end{bmatrix} \label{mat:rotx} \end{equation} \begin{equation} \begin{bmatrix} \cos(\theta) & 0 & -\sin(\theta)\\ 0 & 1 & 0\\ \sin(\theta) & 0 & \cos(\theta) \end{bmatrix} \label{mat:roty} \end{equation} \begin{equation} \begin{bmatrix} \cos(\theta) & -\sin(\theta) & 0\\ \sin(\theta) & \cos(\theta) & 0\\ 0 & 0 & 1 \end{bmatrix} \label{mat:rotz} \end{equation}
It follows then that to transform from the tilted sector coordinate system to the sector coordinate system, one would use matrix \ref{mat:roty} with $\theta=25\degree{}$. Likewise, to transform from the sector coordinate system for sector n to the lab coordinate system, one would use matrix \ref{mat:rotz} with $\theta=(n-1)60\degree{}$, since the six sectors are spaced 60\textdegree{} apart.
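A sketch of these transformations is given below. The method name is hypothetical and the sign conventions simply follow matrices \ref{mat:roty} and \ref{mat:rotz} as written above; the reconstruction software has its own utilities for this.
\begin{lstlisting}
//Hypothetical helper (not the clas12-offline-software implementation):
//take a vector in the tilted sector system of the given sector and
//return it in the lab system.
public static double[] tiltedSectorToLab(double[] v, int sector){
    //Tilted sector -> sector: rotate about the y-axis by 25 degrees
    double tilt = Math.toRadians(25.0);
    double[] s = new double[]{
        Math.cos(tilt) * v[0] - Math.sin(tilt) * v[2],
        v[1],
        Math.sin(tilt) * v[0] + Math.cos(tilt) * v[2]
    };
    //Sector -> lab: rotate about the z-axis by (n - 1) * 60 degrees
    double phi = Math.toRadians(60.0 * (sector - 1));
    return new double[]{
        Math.cos(phi) * s[0] - Math.sin(phi) * s[1],
        Math.sin(phi) * s[0] + Math.cos(phi) * s[1],
        s[2]
    };
}
\end{lstlisting}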
Runge-Kutta Approximation
The Runge-Kutta approximations are a family of numerical methods used to approximate solutions to ordinary differential equations. The simplest member of this family is Euler's method, shown in equations \ref{eq:eulerMethod:1} and \ref{eq:eulerMethod:2}. Here it is important to see that $f(t,x)=\frac{dx}{dt}$ and $h$ is the step size in $t$. It is straightforward to see that it is possible to achieve an arbitrary level of accuracy by allowing $h$ to be arbitrarily small.\autocite{griffiths10}
\begin{equation} x_{n+1}=x_{n}+hf(t_{n},x_{n}) \label{eq:eulerMethod:1} \end{equation} \begin{equation} t_{n+1}=t_{n}+h \label{eq:eulerMethod:2} \end{equation}
However, it is often either not possible or not feasible to allow h to become arbitrarily small. This has motivated the use of the higher order Runge-Kutta methods as they are more algorithmically efficient at giving approximations of the same quality. This allows for larger step sizes and thus fewer steps. Of these methods, the 4th order method, often referred to simply as "the Runge-Kutta method," is used in the drift chamber reconstruction process. At its core, the 4th order method is a weighted average of 4 different approximations as seen in equations \ref{eq:rungeKutta:1} through \ref{eq:rungeKutta:6}.\autocite{griffiths10}
\begin{equation} k_1=f(t_n,x_n) \label{eq:rungeKutta:1} \end{equation} \begin{equation} k_2=f(t_n+\frac{h}{2},x_n+h\frac{k_1}{2}) \label{eq:rungeKutta:2} \end{equation} \begin{equation} k_3=f(t_n+\frac{h}{2},x_n+h\frac{k_2}{2}) \label{eq:rungeKutta:3} \end{equation} \begin{equation} k_4=f(t_n+h,x_n+hk_3) \label{eq:rungeKutta:4} \end{equation} \begin{equation} x_{n+1}=x_n+\frac{h}{6}(k_1+2k_2+2k_3+k_4) \label{eq:rungeKutta:5} \end{equation} \begin{equation} t_{n+1}=t_n+h \label{eq:rungeKutta:6} \end{equation}
It is important to note here that all members of the Runge-Kutta family do accumulate errors as they step through more iterations. The error in a single step, called the Local Truncation Error, is of the order $O(h^{5})$, and the accumulated error is of the order $O(h^{4})$.\autocite{griffiths10}
Jefferson Lab's software team has implemented the 4th order algorithm in an object called a Swimmer. This Swimmer is given maps of the magnetic fields and a state vector which includes the position, momentum, and charge of the particle of interest. The swimmer can then be called upon to swim (step) the particle until it meets certain position criteria, such as reaching a particular value along the z-axis.
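To make the stepping concrete, below is a minimal, generic 4th order Runge-Kutta step following equations \ref{eq:rungeKutta:1} through \ref{eq:rungeKutta:6}. It is a toy analogue of what the Swimmer does internally, not the actual Swimmer code.
\begin{lstlisting}
import java.util.function.BiFunction;

//A toy RK4 stepper (not the Swimmer itself): advance the state x by one
//step h under the derivative function f(t, x) = dx/dt.
public static double[] rk4Step(BiFunction<Double, double[], double[]> f, double t, double[] x, double h){
    double[] k1 = f.apply(t, x);
    double[] k2 = f.apply(t + h / 2, axpy(x, k1, h / 2));
    double[] k3 = f.apply(t + h / 2, axpy(x, k2, h / 2));
    double[] k4 = f.apply(t + h, axpy(x, k3, h));
    double[] next = new double[x.length];
    for(int i = 0; i < x.length; i++){
        //Weighted average of the four slope estimates
        next[i] = x[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
    }
    return next;
}

//Helper: component-wise x + a * k
private static double[] axpy(double[] x, double[] k, double a){
    double[] r = new double[x.length];
    for(int i = 0; i < x.length; i++){
        r[i] = x[i] + a * k[i];
    }
    return r;
}
\end{lstlisting}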
Kalman Fitter
The Kalman Fitter is built on the concept of a Kalman Filter. A Kalman Filter is a statistical algorithm which combines a hypothetical model with measurements to find the Best Linear Unbiased Estimate (BLUE) of an evolved system. Included in this section are a description of the Kalman Filter process, a summary of its mathematics, and a discussion of the implementation used in the Hit Based and Time Based Reconstruction algorithms.
In general, the Kalman Filter algorithm proceeds as follows. First, the initial state vector of interest and its associated uncertainties are evolved until there is a measurement available to compare against. The algorithm then compares the predicted and measured values and takes a weighted average of them, wherein more weight is given to values with smaller uncertainties. The state vector is updated and the algorithm repeats. It is worth mentioning that the Kalman Filter is efficient in the sense that there is no need to keep track of old state vectors; there is instead a covariance matrix which is updated every step.\autocite{miller87}
A summary of the mathematics of the Kalman Filter is laid out in equations \ref{eq:kalman:1} through \ref{eq:kalman:16}. It is quoted from \textit{An Introduction to Kalman Filtering with Applications} by Miller and Leskiw.\autocite{miller87}
Let $\pmb{x}(t)$ be a state vector which satisfies the deterministic ordinary differential system \begin{equation} \pmb{\dot{x}}(t)=F(t)\pmb{x}(t) \label{eq:kalman:1} \end{equation} \begin{equation} \pmb{x}(t_{0})=\pmb{x}_{0} \label{eq:kalman:2} \end{equation} Then in the Kalman theory we have:
\textbf{System model} \begin{equation} \pmb{\dot{X}}(t)=F(t)\pmb{X}(t)+\pmb{V}(t) \label{eq:kalman:3} \end{equation} \begin{equation} \pmb{X}(t_{0})=\pmb{\hat{x}}_{0} \label{eq:kalman:4} \end{equation} where $\pmb{V}(t)$ has mean zero and cross-covariance matrix $S(t,s)=B(t)\Omega(t)B'(t)\delta(t-s)$ where $\Omega(t)$ is a positive definite spectral density matrix and $B(t)$ is a square matrix. The random initial condition $\pmb{\hat{x}}_{0}$ has mean $\pmb{x}_{0}$ and positive definite covariance matrix $P_0$.
\textbf{Measurement model} \begin{equation} \pmb{Z}_{k}=H_{k}\pmb{x}(t_{k})+\pmb{v}_{k}, k=1,2,... \label{eq:kalman:5} \end{equation} where $\pmb{v}_{k}$ has mean zero and positive definite covariance matrix $R_{k}$. It is assumed that $\pmb{V}(t)$, $\pmb{\hat{x}}_{0}$, and $\pmb{v}_{k}, k=1,2,...$ are independent.
\textbf{State estimate propagation} \begin{equation} \pmb{\dot{\hat{x}}}(t)=F(t)\pmb{\hat{x}}(t), t_{0}\leq{}t<t_{l} \label{eq:kalman:6} \end{equation} \begin{equation} \pmb{\hat{x}}(t_{0})=\pmb{\hat{x}}_{0} \label{eq:kalman:7} \end{equation} and \begin{equation} \pmb{\dot{\hat{x}}}=F(t)\pmb{\hat{x}}(t) \label{eq:kalman:8} \end{equation} has initial condition $\pmb{\hat{x}}(t_{k})$ in $[t_{k}, t_{k+1},...), k=1,2,...$
\textbf{Error covariance propagation} \begin{equation} \dot{P}(t)=F(t)P(t)+P(t)F'(t)+B(t)\Omega(t)B'(t), t_{0}\leq{}t<t_{l} \label{eq:kalman:9} \end{equation} \begin{equation} P(t_{0})=P_{0} \label{eq:kalman:10} \end{equation} and \begin{equation} \dot{P}(t)=F(t)P(t)+P(t)F'(t)+B(t)\Omega(t)B'(t) \label{eq:kalman:11} \end{equation} has initial condition $P(t_{k})$ in $[t_{k}, t_{k+1},...), k=1,2,...$
\textbf{State estimate update} \begin{equation} \pmb{\hat{x}}(t_{k})=(I-K_{k}H_{k})\pmb{\hat{x}}(t^{-}_{k})+K_{k}\pmb{Z}_{k} \label{eq:kalman:12} \end{equation} \begin{equation} =P(t_{k})[P^{-1}(t^{-}_{k})\pmb{\hat{x}}(t^{-}_{k})+{H'}_{k}{R_{k}}^{-1}\pmb{Z}_{k}], k=1,2,... \label{eq:kalman:13} \end{equation}
\textbf{Error covariance update} \begin{equation} P(t_{k})=[P^{-1}(t^{-}_{k})+{H'}_{k}{R^{-1}}_{k}H_{k}]^{-1} \label{eq:kalman:14} \end{equation} \begin{equation} =(I-K_{k}H_{k})P(t^{-}_{k}), k=1,2,... \label{eq:kalman:15} \end{equation}
\textbf{Gain matrix} \begin{equation} K_{k}=P(t_{k}){H'}_{k}{R^{-1}}_{k}, k=1,2,... \label{eq:kalman:16} \end{equation}
\bigskip This is implemented in the Fitter as a method of refining a trajectory provided by calculations done on the raw hits. In essence, this method takes the initial state vector and steps it to each drift chamber region using the Runge-Kutta based Swimmer discussed in the previous section. At each region, the Kalman Filter is used to refine the trajectory. After finishing with the third region, the algorithm then steps back to near the starting location of the initial state vector and repeats the process up to 30 times. The algorithm will exit early if the \textchi{}\textsuperscript{2} is sufficiently small.
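The flavor of the update step is easiest to see in one dimension. The sketch below applies equations \ref{eq:kalman:12} through \ref{eq:kalman:16} for a scalar state with $H=1$; it is a pedagogical toy, not the drift chamber Kalman Fitter itself.
\begin{lstlisting}
//Toy scalar Kalman update (H = 1): blend a predicted state xPred with
//variance pPred against a measurement z with variance r. Returns the
//updated state estimate and its updated variance.
public static double[] kalmanUpdate(double xPred, double pPred, double z, double r){
    double k = pPred / (pPred + r);        //gain: weights the smaller variance more
    double xNew = xPred + k * (z - xPred); //state estimate update
    double pNew = (1 - k) * pPred;         //error covariance update
    return new double[]{xNew, pNew};
}
\end{lstlisting}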
Rastered Beam
A rastered beam is the intentional movement of the incident electrons, ideally translating without rotation, over the surface of the target. The primary purpose of this is to distribute the energy deposited into the target more uniformly and avoid localized target heating that would result in physical changes to the target. In the case of a cryogenic target, overheating can cause localized boiling, reducing the effective length of the target and resulting in a lower luminosity for the experiment. In the case of a polarized target, overheating can impact the ability of the target to be polarized, resulting in a need to recover target polarization through annealing.
As already stated, the ideal case is a raster that purely translates the incident electrons in the plane perpendicular to the electron momentum without any rotation. The underlying assumption used to determine the kinematics of an observed scattering event is that the incident electrons were directed parallel to a coordinate axis. In CLAS12, the reference coordinate system aligns the z-axis such that it points parallel to the incident electron's momentum. The scattered electron's polar angle with respect to this axis is used to quantify kinematic variables in the reaction and the cross section. Most notably, this angle is used to determine the amount of four-momentum that was transferred to the target. It follows that an angular offset in the incident electron beam would be an error that propagates to the measured kinematics and cross sections.
Unfortunately, it is nigh impossible to achieve pure translation in a real lab setting. It is preferable, then, to simply minimize the changes in incident angle to such a degree that the contribution to systematic errors is minimal. In the common case of using a dipole magnet to create the raster behavior, it is sufficient that the angular size of the target relative to the magnet's position be quite small. This ensures that any particle that hits the target will also have a small change in angle relative to any other particle.
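For a sense of scale, consider a hedged, purely illustrative estimate: the numbers below (a raster magnet $L=20\,m$ upstream of the target producing a maximum transverse displacement of $r=1\,cm$) are assumptions for this example, not measurements of the Hall B beamline. The resulting change in incident angle is \begin{equation} \Delta\theta\approx\tan^{-1}\left(\frac{r}{L}\right)=\tan^{-1}\left(\frac{1\,cm}{2000\,cm}\right)\approx{}0.5\,mrad \end{equation} which is small in the sense required above.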
Simulation and Reconstruction
This chapter describes the steps taken to determine the efficacy of the proposed reconstruction algorithm. Included are a description of the physical setup of the actual polarized SIDIS dual target, a description of the simulations done to gather test data, and a discussion of both the current and proposed reconstruction algorithms. The ultimate goal of the proposed reconstruction algorithm is to be able to confidently state which of the two subtargets any given event spawned from.
Physical Setup
Figure \ref{fig:forestTarget} provides a general description of the target used in the simulation. In the simulation, the setup is centered in the middle of the solenoid magnet with a gap of four centimeters between the two targets. Each target is two centimeters long and has a radius of one centimeter. Each of the targets can be independently polarized and may contain either NH\textsubscript{3} or ND\textsubscript{3}.
\begin{figure} \includegraphics[width=\textwidth]{./Img/TF_RGC_DualPoltTargFig_11-16-2017.png} \caption{A description of the general geometry of the polarized dual target.\autocite{forestTarget}} \label{fig:forestTarget} \end{figure}
Simulation
All of the data used to analyze the efficacy of the proposed reconstruction algorithm was generated using GEMC version 4a.2.4. GEMC\autocite{gemc} (GEant4 Monte-Carlo) takes in configuration files called GCards. These GCards direct the program to load in certain tagged releases of the files (geometry, magnetic fields, etc.) required to simulate the desired experiment. The CLAS12 software team has created a repository containing the various revision tags which can be used.\autocite{clas12Tags} A sample GCard can be found in Appendix A.
Additionally, the GCards allow the user to set certain parameters within the simulation. Of particular importance are scaling the magnetic fields and specifying the manner in which the simulated particles are generated. The magnetic field scales are set by simply passing in the desired scale factor. The event generator is somewhat more complicated. It is broken into four different components: beam momentum, momentum spread, beam position, and position spread. The beam momentum has four parameters: simulated particle, momentum magnitude, momentum polar angle, and momentum azimuthal angle. The momentum spread has three parameters: plus or minus momentum spread, plus or minus polar angle, and plus or minus azimuthal angle. The beam position has three parameters: x-position, y-position, and z-position. The position spread has two parameters: spread radius in the x-y plane and z-position spread. A full description of all available options can be found at https://gemc.jlab.org/gemc/html/documentation/options.html.
It should be noted here that the parameters for the momentum were provided by Dr. Tony Forest\autocite{forestTarget} and that they are indicative of the expected kinematic range of the actual experiment.
\begin{table}[!ht] \centering \begin{tabular}{|c|c|c|c|c|} \hline & e- In Up & e- In Down & e- Out Up & e- Out Down \\ \hline Toroid Scale & -1.0 & -1.0 & 1.0 & 1.0 \\ \hline $|P|$ & $4.75\pm{}3.25GeV$ & $4.75\pm{}3.25GeV$ & $4.75\pm{}3.25GeV$ & $4.75\pm{}3.25GeV$ \\ \hline $\theta$ & $22.5\pm{}17.5\degree$ & $22.5\pm{}17.5\degree$ & $22.5\pm{}17.5\degree$ & $22.5\pm{}17.5\degree$\\ \hline $\phi$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$\\ \hline $X$ & $0.0cm$ & $0.0cm$ & $0.0cm$ & $0.0cm$ \\ \hline $Y$ & $0.0cm$ & $0.0cm$ & $0.0cm$ & $0.0cm$ \\ \hline $R$ & $1.0cm$ & $1.0cm$ & $1.0cm$ & $1.0cm$ \\ \hline $Z$ & $-3.0\pm{}1.0cm$ & $3.0\pm{}1.0cm$ & $-3.0\pm{}1.0cm$ & $3.0\pm{}1.0cm$ \\ \hline \hline & \textpi{}+ Up & \textpi{}+ Down & \textpi{}- Up & \textpi{}- Down \\ \hline Toroid Scale & 1.0 & 1.0 & -1.0 & -1.0 \\ \hline $|P|$ & $1.8\pm{}1.0GeV$ & $1.8\pm{}1.0GeV$ & $1.8\pm{}1.0GeV$ & $1.8\pm{}1.0GeV$ \\ \hline $\theta$ & $45.0\pm{}45.0\degree$ & $45.0\pm{}45.0\degree$ & $45.0\pm{}45.0\degree$ & $45.0\pm{}45.0\degree$\\ \hline $\phi$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$ & $0.0\pm{}180.0\degree$\\ \hline $X$ & $0.0cm$ & $0.0cm$ & $0.0cm$ & $0.0cm$ \\ \hline $Y$ & $0.0cm$ & $0.0cm$ & $0.0cm$ & $0.0cm$ \\ \hline $R$ & $1.0cm$ & $1.0cm$ & $1.0cm$ & $1.0cm$ \\ \hline $Z$ & $-3.0\pm{}1.0cm$ & $3.0\pm{}1.0cm$ & $-3.0\pm{}1.0cm$ & $3.0\pm{}1.0cm$ \\ \hline \end{tabular} \caption{Descriptions of the non-default parameters used to generate the eight data sets. The polar and azimuthal angles are labeled $\theta$ and $\phi$ respectively, matching the GEMC BEAM\_P ordering. Inbending and outbending are shortened to In and Out respectively. Upstream and Downstream are shortened to Up and Down respectively.} \label{tab:gemcParams} \end{table}
Four pairs of data sets were generated for this analysis. Each pair consists of one 5000 event data set generated uniformly within each of the two subtargets; each pair was later combined into a single 10000 event data set. The four resulting data sets are labeled as follows: electron-inbending, electron-outbending, positive pion, and negative pion. The two electron set parameters differ only by the scale of the toroid: in one case it is set such that electrons bend into the beam line, and in the other they bend away. In the case of the pions, the toroid scale is set such that the particles always bend into the beam line. Table \ref{tab:gemcParams} shows all of the non-default parameters used to generate each data set.
After GEMC has been run, the resulting EVIO (Event Input Output) files must be converted to HIPO (High Performance Output) files. This is done by calling one of the utilities packaged in clas12-offline-software, evio2hipo. Once that is done, the upstream and downstream components of each pair of data sets are merged using another clas12-offline-software program, hipo-utils. Examples for running all of these can be found in Appendix A. All that remains then is to run the HIPO files through the two pre-existing reconstruction algorithms and then the proposed algorithm.
Current Reconstruction Algorithms
There are currently two drift chamber reconstruction algorithms. The first, called Hit Based, feeds into the second, called Time Based. The following paragraphs are brief descriptions of both of them. Appendix B holds lengthy descriptions of both algorithms.
The Hit Based algorithm is the first pass at reconstruction and is largely used to lay the groundwork for the Time Based algorithm. In broad strokes, the Hit Based algorithm works by first collecting all of the drift chamber hits and grouping them by their superlayer. Those groups are then processed and a linear fit is attempted. That fit is then used to find a position-momentum state vector between the superlayers of each region. Finally, a combination of a Kalman Fitter and the Runge-Kutta based Swimmer is used to reconstruct the trajectory of the particle and, thus, the interaction vertex.
The Time Based algorithm works by taking the tracks already reconstructed by the Hit Based reconstruction and refining them further. First, the groups of hits are regathered and some of the likely-extraneous hits are culled using timing information. Those groups then undergo another test to determine whether the entire group should be thrown out. The Kalman Fitter and Swimmer are then used again to find the new trajectories and interaction vertices.
Both of these reconstruction algorithms are run on each HIPO file by calling the notsouseful-util program that is packaged as a part of clas12-offline-software. Once again, an example of the command to call the program can be found in Appendix A.
Proposed Reconstruction Algorithm
The Raster Based algorithm takes the output of either the Hit Based or the Time Based algorithm and attempts to fit it to an off-axis beam position. This is achieved by performing a grid search through a small region of momentum space about a state vector provided by the older algorithms. The actual vertex is found by swimming each of these slightly varied state vectors to the beam line and calculating the DOCA. Please note that the source code for this is found in its entirety in Appendix C.
This program currently only exists in a fork of Jefferson Lab's official GitHub repository. The address of the fork is https://github.com/friadavi/clas12-offline-software. Should the reader wish to run the program, the fritracking utility is packaged with the rest of the clas12-offline-software in the aforementioned fork. An example of calling the program may be found in Appendix A.
For this analysis, the data sets reconstructed by the Hit and Time based algorithms were reconstructed multiple times while the two grid search parameters, span and samples, were varied independently. The span was varied, inclusively, between values of 0.001 and 0.015 (0.1 and 1.5\%). The number of samples was varied, again inclusively, between 3 and 11. This allows for an analysis of the reconstruction as a function of the two independently.
Parameters
This algorithm takes a variety of input parameters which control both the extent and granularity of the two searches involved as well as various other variables. These parameters are listed and explained below.
\begin{description} \item[Input]The name and path of the input file to reconstruct. \item[Output]The name and path of the output file. Any preexisting file will be overwritten. \item[Engine]Which engine's output to reconstruct from. The options at present are Hit Based and Time Based. In the event that this parameter is not specified, the algorithm will preferentially take the Time Based output if available, and the Hit Based output if not. \item[Solenoid]The scale factor of the magnetic field generated by the solenoid. \item[Toroid]The scale factor of the magnetic field generated by the toroid. \item[Grid Search Span]The span in each Cartesian dimension in momentum space about which the grid search will take place. This value is provided as a percentage such that the span of the search is $p_{x,y,z}(1\pm{}value)$. \item[Grid Search Samples]The number of evenly spaced sample points to take in each momentum dimension. This includes the edges of the span; as such, this number should never be less than two. It is recommended to always use an odd number so that the starting value is included in the search. \item[Vertex Search Upper Bound]The upper bound in the z-direction in which to search for the DOCA to the beam position. \item[Vertex Search Lower Bound]The lower bound in the z-direction in which to search for the DOCA to the beam position. \item[Vertex Samples]The number of evenly spaced sample points to take in each iteration of the vertex search. This includes the upper and lower bounds; as such, this number should not be less than two. \item[Vertex Iterations]The number of recursive iterations to go through in the vertex search. \item[Threads]The maximum number of threads that can be used in the reconstruction. \end{description}
Overarching Algorithm
The algorithm begins by parsing command line options; it stops the reconstruction if any errors occur. After that, the magnetic field maps are initialized and scaled if necessary. Then, for each event within the input file, the reconstruction algorithm is run and the output, if any, is written to the output file.
Reconstruction
The reconstruction begins by reading in whichever output banks are required, as dictated by the Engine command line parameter. Then the beam position is read in from the simulation bank; if the event is real data, the beam position is assumed to be at $x=0, y=0$. There is currently an assumed radius of error of 0.5mm in the beam position. The algorithm then collects a state vector output by the Hit Based or Time Based algorithm. This state vector's position is just upstream of the region 1 drift chamber and will be the starting point of the search. The grid search is run and the output is written to the data banks.
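As a sketch of the bank access involved, the snippet below uses the same DataEvent/DataBank API that appears in the Appendix C listing. The bank and column names (MC::Particle, vx, vy) are assumptions made for illustration, not a statement of what the program actually reads.
\begin{lstlisting}
//Hedged sketch: read the generated vertex from the simulation bank to
//use as the beam position, falling back to (0, 0) for real data. The
//bank and column names here are illustrative assumptions.
float rasterX = 0.0f;
float rasterY = 0.0f;
DataBank mcBank = event.getBank("MC::Particle");
if(mcBank != null && mcBank.rows() > 0){
    rasterX = mcBank.getFloat("vx", 0);
    rasterY = mcBank.getFloat("vy", 0);
}
\end{lstlisting}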
Grid Search
This sub-algorithm starts by computing the upper and lower bounds of the momentum space in which to search. Arrays to store the output of each search point are then created and a ThreadPool is established to allow for the multithreading of the vertex searches for each grid search sample point. A vertex search job thread is then submitted for every sample point in the three dimensional momentum search space. For the Grid Search Samples value n, there are $n^{3}$ sample points, each evenly spaced in all three dimensions.
After all submitted jobs have completed, the algorithm searches through all of the outputs and identifies interaction vertices which are either within the beam line's margin of error or a local minimum; all other points are culled. Any point whose interaction vertex z value is outside of the vertex search bounds is also culled. Finally, the algorithm searches through all surviving interaction vertices and selects for output the one whose initial starting momentum deviated the least from the Hit Based or Time Based momentum.
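The enumeration of sample points can be sketched as follows. The names are illustrative and this is not the fritracking source; it simply realizes the $n^{3}$ evenly spaced points described above.
\begin{lstlisting}
//Illustrative sketch of the momentum-space sampling: for a fractional
//span and n samples per dimension (n >= 2), enumerate the n^3 evenly
//spaced trial momenta about the seed momentum (px, py, pz).
public static float[][] gridPoints(float px, float py, float pz, float span, int n){
    float[][] points = new float[n * n * n][3];
    int idx = 0;
    for(int i = 0; i < n; i++){
        for(int j = 0; j < n; j++){
            for(int k = 0; k < n; k++){
                //Map each index 0..n-1 onto the interval [1 - span, 1 + span]
                points[idx][0] = px * (1 - span + 2 * span * i / (n - 1));
                points[idx][1] = py * (1 - span + 2 * span * j / (n - 1));
                points[idx][2] = pz * (1 - span + 2 * span * k / (n - 1));
                idx++;
            }
        }
    }
    return points;
}
\end{lstlisting}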
Vertex Search
This sub-algorithm begins by swimming from the state vector provided by the grid search to each sample z value within the search region. The sample points are evenly spaced throughout the region and include the boundaries. The swim outputs are collected, and the DOCA for each one is calculated. All local minima amongst the DOCAs are added to a list. If this is the last iteration, the algorithm returns the output which corresponds to the smallest DOCA, failing gracefully if no swims completed successfully.
In the event that there are more iterations to go, this algorithm recursively calls itself on each local minimum, as sketched below. The bounds of the recursive search are as follows: if the minimum is at a boundary of the search region, the region is recentered on that boundary without changing the actual distance spanned, and the function is called without decrementing the iteration count. Otherwise, the search region is recentered on the point associated with the local minimum, the boundaries are shrunk to coincide with the nearest neighbor sample points, the iteration count is decremented, and the function is called. The output of each new search is collected and the one with the smallest DOCA is returned.
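A simplified version of that recursion, with the boundary re-centering case omitted for brevity, might look like the following; the full implementation is in Appendix C.
\begin{lstlisting}
//Simplified sketch of the recursive vertex refinement: sample doca(z)
//at evenly spaced points, then shrink the window to the neighbors of
//the best sample and recurse. The Appendix C code additionally
//re-centers the window when the minimum falls on a boundary.
public static double refineVertex(java.util.function.DoubleUnaryOperator doca, double zMin, double zMax, int samples, int iterations){
    double step = (zMax - zMin) / (samples - 1);
    int best = 0;
    double bestDoca = Double.MAX_VALUE;
    for(int i = 0; i < samples; i++){
        double d = doca.applyAsDouble(zMin + i * step);
        if(d < bestDoca){
            bestDoca = d;
            best = i;
        }
    }
    double zBest = zMin + best * step;
    if(iterations <= 1){
        return zBest;
    }
    return refineVertex(doca, zBest - step, zBest + step, samples, iterations - 1);
}
\end{lstlisting}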
Analysis
Before moving into the analysis proper, it is worth summarizing what exactly is being analyzed and to what end. Four data sets, each initially of 10,000 events, have been generated and run through the Hit Based and Time Based reconstruction algorithms. The outputs of the old reconstructions were then fed into the proposed reconstruction multiple times while varying the number of samples used and the span of the search. These data sets consist of events that were generated within the volume of the subtargets of the polarized SIDIS dual target. Two of the data sets consist of electrons, one with an inbending toroid configuration and one with an outbending configuration. The other two data sets are \textpi{}+ and \textpi{}- particles, each with a toroid configuration which will cause them to bend into the beam line. The goal is to be able to reconstruct the interaction vertex of each particle well enough that one could say with confidence which of the two subtargets it was generated in.
The analysis will be presented as follows: for each of the four data sets, graphs formed from the one Raster Based reconstruction output which is included both in the set generated by varying the span and in the set generated by varying the number of samples are offered as indicative of the general behavior of their data set. The values of the span and number of samples used to produce these graphs are 0.005 (0.5\%) and seven, respectively. These are also the values at which one parameter is held constant while the other is varied. After that, a number of graphs will be presented which indicate trends, or the lack thereof, amongst the varied parameters.
\section{e- Inbending} To begin, examine figures \ref{fig:e-InDeltaXHist} and \ref{fig:e-InDeltaYHist}. There is clearly a marked improvement in the quality of reconstruction in both the x and y directions. Figure \ref{fig:e-InDeltaZHist} shows an improvement in the reconstruction in the z direction, though decidedly less impressive than the x and y case. Of note here is the \textsigma of the gaussian fit to the Raster Based reconstruction. Its value is approximately $\frac{1}{5}$ of the 4cm distance between the two subtargets. This suggests that this particular set of parameters puts the reconstruction right on the cusp of 5\textsigma confidence in being able to tell from which subtarget particles were generated.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/7-/S-T-electron7-0.005DeltaXHist}.png} \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for inbending electrons} \label{fig:e-InDeltaXHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/7-/S-T-electron7-0.005DeltaYHist}.png} \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for inbending electrons} \label{fig:e-InDeltaYHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/7-/S-T-electron7-0.005DeltaZHist}.png} \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for inbending electrons} \label{fig:e-InDeltaZHist} \end{figure}
Also worthy of note here is the difference in counts for each algorithm. It would seem that, for the distribution of particles generated, the HB/TB reconstruction is approximately 75\% efficient and the Raster Based reconstruction is approximately 66\% efficient on top of that. This leads to an ultimate efficiency of approximately 50\%.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SampleDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for inbending electrons} \label{fig:e-InDeltaZSamplesMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SampleEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for inbending electrons} \label{fig:e-InSamplesEff} \end{figure}
Figure \ref{fig:e-InDeltaZSamplesMeta} shows that the number of samples has very little effect on the reconstruction while figure \ref{fig:e-InSamplesEff} shows that increasing the number of samples does tend to increase the reconstruction efficiency by a small amount.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SpanDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for inbending electrons} \label{fig:e-InDeltaZSpanMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/electron/S-T-electron7-0.005SpanEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for inbending electrons} \label{fig:e-InSpanEff} \end{figure}
Figure \ref{fig:e-InDeltaZSpanMeta} shows that there is a general trend towards a larger \textsigma{} as the search span is increased. Additionally, figure \ref{fig:e-InSpanEff} shows that the efficiency also trends generally positive as the span increases. A potential reason for both of these will be explored in the conclusion.
\section{e- Outbending} Comparing figures \ref{fig:e-OutDeltaXHist} through \ref{fig:e-OutSpanEff} with their corresponding figures \ref{fig:e-InDeltaXHist} through \ref{fig:e-InSpanEff} will show largely identical results, albeit with the outbending situation having a generally higher efficiency.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/7-/S-T+electron7-0.005DeltaXHist}.png} \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for outbending electrons} \label{fig:e-OutDeltaXHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/7-/S-T+electron7-0.005DeltaYHist}.png} \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for outbending electrons} \label{fig:e-OutDeltaYHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/7-/S-T+electron7-0.005DeltaZHist}.png} \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for outbending electrons} \label{fig:e-OutDeltaZHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SampleDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for outbending electrons} \label{fig:e-OutDeltaZSamplesMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SampleEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for outbending electrons} \label{fig:e-OutSamplesEff} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SpanDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for outbending electrons} \label{fig:e-OutDeltaZSpanMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/electron/S-T+electron7-0.005SpanEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for outbending electrons} \label{fig:e-OutSpanEff} \end{figure}
\section{\textpi{}+} Before looking at any fits, one should first note the greatly reduced number of events. This is mainly due to reconstruction failures in the HB/TB algorithms, as the efficiency of the Raster Based algorithm is not sufficiently worse to account for the change. The analysis in this section should be taken with a grain of salt due to the reduced statistics. Figures \ref{fig:pi+OutDeltaXHist} and \ref{fig:pi+OutDeltaYHist} show that the resolution in the x and y directions is largely unchanged from the electron reconstructions. Unfortunately, the \textsigma{} of the fit in figure \ref{fig:pi+OutDeltaZHist} has gotten larger and is now solidly between $\frac{1}{5}$ and $\frac{2}{5}$ of the distance between the two subtargets, thus reducing the confidence in stating which target the event came from.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/7-/S-T+piPlus7-0.005DeltaXHist}.png} \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}+'s} \label{fig:pi+OutDeltaXHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/7-/S-T+piPlus7-0.005DeltaYHist}.png} \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}+'s} \label{fig:pi+OutDeltaYHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/7-/S-T+piPlus7-0.005DeltaZHist}.png} \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}+'s} \label{fig:pi+OutDeltaZHist} \end{figure}
Once again, as per figures \ref{fig:pi+OutDeltaZSamplesMeta} and \ref{fig:pi+OutSamplesEff}, the number of samples taken has little effect on the resolution in the z direction, while the efficiency does increase as the number of samples increases. This time, however, the span has a lessened effect on the \textsigma{} of the z direction, as seen in figure \ref{fig:pi+OutDeltaZSpanMeta}. Figure \ref{fig:pi+OutSpanEff} shows that the efficiency increases as the span increases, just as was the case for the electrons.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SampleDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for \textpi{}+'s} \label{fig:pi+OutDeltaZSamplesMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SampleEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for \textpi{}+'s} \label{fig:pi+OutSamplesEff} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SpanDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for \textpi{}+'s} \label{fig:pi+OutDeltaZSpanMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T+/piPlus/S-T+piPlus7-0.005SpanEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for \textpi{}+'s} \label{fig:pi+OutSpanEff} \end{figure}
\section{\textpi{}-} Much as the outbending electron reconstructions were largely identical to the inbending reconstructions, the \textpi{}- reconstructions are largely identical to the \textpi{}+ reconstructions. Note that the fit to the Raster Based \textDelta{}X data is clearly not correct; this is almost certainly an artifact of the small sample size.
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/7-/S-T-piMinus7-0.005DeltaXHist}.png} \caption{The difference between the generated and reconstructed x positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}-'s} \label{fig:pi-OutDeltaXHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/7-/S-T-piMinus7-0.005DeltaYHist}.png} \caption{The difference between the generated and reconstructed y positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}-'s} \label{fig:pi-OutDeltaYHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/7-/S-T-piMinus7-0.005DeltaZHist}.png} \caption{The difference between the generated and reconstructed z positions of the interaction vertex for the HB/TB algorithm as well as the Raster Based for \textpi{}-'s} \label{fig:pi-OutDeltaZHist} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SampleDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the number of grid search samples for \textpi{}-'s} \label{fig:pi-OutDeltaZSamplesMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SampleEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the number of grid search samples for \textpi{}-'s} \label{fig:pi-OutSamplesEff} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SpanDeltaZ}.png} \caption{The mean and \textsigma{} of the gaussian fit to the Raster Based \textDelta{}z data as a function of the grid search span for \textpi{}-'s} \label{fig:pi-OutDeltaZSpanMeta} \end{figure}
\begin{figure}[!ht] \includegraphics[width=\textwidth]{{./Img/Graphs/S-T-/piMinus/S-T-piMinus7-0.005SpanEfficiency}.png} \caption{The reconstruction efficiency of the Raster Based algorithm as a function of the grid search span for \textpi{}-'s} \label{fig:pi-OutSpanEff} \end{figure}
Conclusion
The goal of this project was ultimately to be able to say with confidence from which of the two subtargets within the polarized SIDIS dual target the electrons scattered. As the distance between the two subtargets is four centimeters, it follows that to state with five sigma confidence which of the two an event spawned from, the reconstruction's fit must have a sigma of at most eight millimeters. The analysis in the previous section has shown that there is a set of search parameters which allows one to state just that with five sigma certainty for electron events. Unfortunately, even the best, and least efficient, case of pion reconstruction can only support that claim with four sigma certainty.
With respect to the relationship between the reconstruction efficiency and the span of the search, it would seem that the Raster Based algorithm is acting as a filter of sorts. As the span increases, both the sigma of the reconstruction and the efficiency increase. This seems to suggest that broadening the set of events which can be successfully reconstructed results in a worse overall quality of reconstruction, which, in turn, suggests that the algorithm intrinsically culls events which are not reconstructed well enough.
Appendices
Appendix A
This appendix contains examples of program calls used throughout the reconstruction process and an example GCard.
Example GCard
Below is an example GCard that could be fed into GEMC.
\begin{lstlisting} <gcard>
<detector name="/jlab/workdir/Thesis/Scripts/detectors/clas12/targets/target" factory="TEXT" variation="PolTarg"/> <detector name="/jlab/workdir/Thesis/Scripts/detectors/clas12/targets/PolTarg/" factory="CAD"/>
<detector name="experiments/clas12/cnd/cnd" factory="TEXT" variation="original"/> <detector name="experiments/clas12/bst/bst" factory="TEXT" variation="java"/> <detector name="experiments/clas12/micromegas/micromegas" factory="TEXT" variation="michel"/>
<detector name="experiments/clas12/ctof/ctof" factory="TEXT" variation="cad"/>
<detector name="experiments/clas12/ctof/javacad/" factory="CAD"/>
<detector name="experiments/clas12/htcc/htcc" factory="TEXT" variation="original"/>
<detector name="experiments/clas12/magnets/solenoid" factory="TEXT" variation="original"/> <detector name="experiments/clas12/magnets/cad/" factory="CAD" />
<detector name="experiments/clas12/ft/ft" factory="TEXT" variation="FTOn"/>
<detector name="experiments/clas12/beamline/cadBeamline/" factory="CAD"/>
<detector name="experiments/clas12/beamline/vacuumLine/" factory="CAD"/>
<detector name="experiments/clas12/beamline/beamline" factory="TEXT" variation="FTOn"/>
<detector name="experiments/clas12/fc/forwardCarriage" factory="TEXT" variation="TorusSymmetric"/>
<detector name="experiments/clas12/dc/dc" factory="TEXT" variation="java"/> <detector name="experiments/clas12/ftof/ftof" factory="TEXT" variation="java"/> <detector name="experiments/clas12/ec/ec" factory="TEXT" variation="java"/> <detector name="experiments/clas12/pcal/pcal" factory="TEXT" variation="java"/> <detector name="experiments/clas12/ltcc/ltcc" factory="TEXT" variation="original"/> <detector name="experiments/clas12/ltcc/cad_cone/" factory="CAD"/> <detector name="experiments/clas12/ltcc/cad/" factory="CAD"/>
<option name="SCALE_FIELD" value="TorusSymmetric, -1.0"/>
<option name="SCALE_FIELD" value="clas12-newSolenoid, -1.0"/>
<option name="HALL_FIELD" value="clas12-newSolenoid"/>
<option name="FIELD_PROPERTIES" value="TorusSymmetric, 2*mm, G4ClassicalRK4, linear"/> <option name="FIELD_PROPERTIES" value="clas12-newSolenoid, 1*mm, G4ClassicalRK4, linear"/>
<option name="BEAM_P" value="e-, 4.75*GeV, 22.5*deg, 0*deg"/>
<option name="BEAM_V" value="(0.0, 0.0, 3.0)cm"/>
<option name="SPREAD_P" value="3.25*GeV, 17.5*deg, 180*deg"/>
<option name="SPREAD_V" value="(1.0, 1.0)cm"/>
<option name="SAVE_ALL_MOTHERS" value="0"/>
<option name="RECORD_MIRRORS" value="1"/>
<option name="PHYSICS" value="FTFP_BERT + STD + Optical"/>
<option name="OUTPUT" value="evio, out.ev"/>
<option name="PRINT_EVENT" value="10" />
<option name="RUNNO" value="11" />
<option name="LUMI_EVENT" value="0, 248.5*ns, 4*ns" />
<option name="RFSETUP" value="0.499, 40, 20" />
<option name="PRODUCTIONCUTFORVOLUMES" value="innerShieldAndFlange, outerFlange, outerMount, nut1, nut2, nut3, nut4, nut5, nut6, nut7, nut8, nut9, taggerInnerShield, main-cone, main-cone, adjuster1, adjuster2, adjuster3, DSShieldFrontLead, DSShieldBackLead, DSShieldInnerAss, DSShieldBackAss, DSShieldFrontAss, DSShieldBackCover, DSShieldFrontCover, DSShieldFlangeAttachment, DSShieldFlange, 100" />
<option name="PRODUCTIONCUTFORVOLUMES" value="connectUpstreamToTorusPipe, connectTorusToDownstreamPipe, downstreamPipeFlange, 100" />
<option name="PRODUCTIONCUTFORVOLUMES" value="BoreShield, CenterTube, DownstreamShieldingPlate, DownstreamVacuumJacket, WarmBoreTube, apex, Shield1, Shield2, Shield3, Shield4, Shield5, Shield6, Shield7, shell1a, shell1b, shell2a, shell2b, shell3a, shell3b, shell4a, shell4b, shell5a, shell5b, shell6a, shell6b, 100" />
</gcard>
\end{lstlisting}
===Program Calls===
An example of the gemc program call:
\begin{lstlisting}
gemc -USE_GUI=0 -gcard=/some/path/to/input.gcard -N=10000 -DATABASE=/some/path/to/local/database.sqlite -OUTPUT="evio, /some/path/to/output.evio"
\end{lstlisting}
An example of the evio2hipo program call:
\begin{lstlisting}
evio2hipo -n 10000 -r 11 -s -1.0 -t -1.0 -o /some/path/to/output.hipo /some/path/to/input.evio
\end{lstlisting}
An example of the hipo-utils program call:
\begin{lstlisting}
hipo-utils -merge -o /some/path/to/output.hipo /some/path/to/input1.hipo /some/path/to/input2.hipo
\end{lstlisting}
An example of the notsouseful-util program call:
\begin{lstlisting}
notsouseful-util -n 10000 -c 2 -i /some/path/to/input.hipo -o /some/path/to/output.hipo
\end{lstlisting}
An example of the fritracking program call:
\begin{lstlisting}
fritracking -s -1.0 -t -1.0 -i /some/path/to/input.hipo -o /some/path/to/output.hipo -x 3 -g 7 0.005 -v 9 2 -z -4.0 4.0
\end{lstlisting}
==Appendix B==
This appendix contains descriptions of the previously existing drift chamber reconstruction algorithms, Hit Based and Time Based. These descriptions are current as of coatjava release 5c.7.4.
===Definition of Terms===
There are several terms which must first be understood before reading further:
\begin{description}
\item[DOCA] Distance of Closest Approach, i.e. the shortest distance between some point in space and some defined path.
\item[Hit] A representation of a measurement from an individual wire.
\item[Cluster] A representation of a group of physically close hits.
\item[Segment] A representation of a linear fit to a cluster.
\item[Cross] A representation of the position and momentum unit vector of the particle's trajectory at the midplane between two superlayers, as reconstructed from the segments of both superlayers.
\item[Trajectory] A general position and momentum vector at some point in space.
\item[Track] A representation of a reconstructed path through the detector.
\item[Road] A representation of a collection of segments.
\item[Swimmer] A software tool utilizing a fourth order Runge-Kutta approximation to step, or "swim", a particle's trajectory through the detector.
\end{description}
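To make the DOCA definition concrete, the following sketch (illustrative only, not code from clas12-offline-software) computes the DOCA between a fixed point and a path represented by discrete samples:
\begin{lstlisting}
import java.util.List;

public class DocaExample {
    /**
     * Returns the smallest distance between the point (px, py, pz) and a
     * path given as a list of sampled {x, y, z} positions.
     */
    public static double doca(double px, double py, double pz,
                              List<double[]> path) {
        double min = Double.MAX_VALUE;
        for (double[] p : path) {
            double dx = p[0] - px, dy = p[1] - py, dz = p[2] - pz;
            min = Math.min(min, Math.sqrt(dx * dx + dy * dy + dz * dz));
        }
        return min;
    }
}
\end{lstlisting}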
===Hit Based Algorithm===
The Hit Based algorithm takes the raw drift chamber hit positions, using no time information beyond that which gives the position along a given wire, and attempts to perform track reconstruction. In general, it is simply used as a first pass whose output seeds the Time Based reconstruction.
====Hits to Clusters====
Raw hits are all initially sorted into clumps, the rawest form of a cluster, created by independently taking each superlayer in each sector and adding the hits found therein to the clump. Each clump is then processed to remove hits caused by noise, identified as hits whose DOCA is greater than the size of the cell. The list of clumps is then reduced to a list of clusters by removing any clumps containing fewer than four hits. A linear fit is then performed on the hits comprising each cluster, and a cluster is split into multiple clusters if the \textchi{}\textsuperscript{2} of its linear fit warrants it. Following that step, the program updates the raw hits with various additional information and stores them along with the clusters.
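A minimal sketch of the culling described above, with a hypothetical Hit type standing in for the actual clas12-offline-software classes:
\begin{lstlisting}
import java.util.ArrayList;
import java.util.List;

interface Hit {
    double doca();     // distance of closest approach for this hit
    double cellSize(); // size of the cell that recorded the hit
}

class ClusterBuilding {
    static List<List<Hit>> clumpsToClusters(List<List<Hit>> clumps) {
        List<List<Hit>> clusters = new ArrayList<>();
        for (List<Hit> clump : clumps) {
            // Remove noise: a real hit cannot have a DOCA larger than its cell.
            clump.removeIf(h -> h.doca() > h.cellSize());
            // Keep only clumps with at least four surviving hits.
            if (clump.size() >= 4) {
                clusters.add(clump);
            }
        }
        return clusters;
    }
}
\end{lstlisting}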
====Clusters to Segments====
Segments are built by first excluding from consideration any clusters with more than fourteen hits. Then, for each remaining cluster, the information from the linear fit performed in the previous step is copied into the segment. Some segments may then be culled if the average DOCA between their raw hits and the cluster's fit line is too large.
====Segments to Crosses====
Crosses are created by looping over all sectors and regions in order and identifying segments in the first superlayer of each region. For each of those segments, the algorithm then loops through again to find a corresponding segment in the second superlayer. A check is done to see whether the two segments are both physically close and consistent in slope from one to the other. If a pair passes both requirements, the cross is created.
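The pairing test can be sketched as follows, with a hypothetical Segment type and hypothetical proximity and slope cuts standing in for the real ones:
\begin{lstlisting}
interface Segment {
    double midpoint(); // position of the segment at its midplane
    double slope();    // slope of the segment's linear fit
}

class CrossMaking {
    // Returns true when two segments from adjacent superlayers are close
    // enough and consistent enough in slope to form a cross.
    static boolean formsCross(Segment first, Segment second,
                              double maxSeparation, double maxSlopeDiff) {
        boolean close = Math.abs(first.midpoint() - second.midpoint()) < maxSeparation;
        boolean consistent = Math.abs(first.slope() - second.slope()) < maxSlopeDiff;
        return close && consistent;
    }
}
\end{lstlisting}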
Crosses are then arranged into short lists which correspond to possible tracks. To do this, the crosses are first sorted into lists by region. Then, for every possible combination of three crosses such that there is one cross from each region, the algorithm performs an approximate fit. This fit returns the trajectory at each of the cross positions. If the angle between any pair of corresponding crosses and trajectories is too great, the triplet of crosses is not considered for track reconstruction. Those that do pass this test are then checked again: a projection of the track onto the x-y plane is checked for linearity. Any remaining triplets become the bases for track candidates.
====Crosses to Tracks Part I====
The algorithm now takes the cross triplets from the previous step and attempts to fit actual tracks to them. For each triplet, the first step is another fit, this time parametric, formed by constraining the parametric trajectory to be tangent to the momentum unit vector of each cross as it passes through it. Any triplets which fail to produce a satisfactory trajectory are culled. At this point the algorithm splits track-candidate finding into two branches. The first handles the case where the cross triplet is a full triplet without any missing members and the torus magnet has been scaled down to a value near zero; this branch looks for straight tracks in the torus region. Otherwise, the algorithm searches for linked helices, one from the torus region and one from the solenoid.
=====Straight Tracks=====
This sub-algorithm first fits the projections of the crosses onto the x-z and y-z planes to lines. If both fits are acceptable, the track is assumed to be straight and the interaction vertex is taken to be where the fitted lines intersect the $z=0$ plane. The particle is also assumed to be a muon. The track candidate is then further refined by passing it through a Kalman fitter, after which it is added to the list of track candidates.
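As a sketch of the vertex determination, a least-squares line can be fit to each projection and its intercept read off at $z=0$; the fitting details here are assumptions for illustration, not the fitter actually used:
\begin{lstlisting}
class StraightTrackVertex {
    // Least-squares fit of x as a linear function of z; returns the
    // intercept at z = 0. Applied to both the x-z and y-z projections,
    // the two intercepts give the assumed interaction vertex (x, y, 0).
    static double interceptAtZ0(double[] z, double[] x) {
        int n = z.length;
        double sz = 0, sx = 0, szz = 0, szx = 0;
        for (int i = 0; i < n; i++) {
            sz += z[i]; sx += x[i];
            szz += z[i] * z[i]; szx += z[i] * x[i];
        }
        double slope = (n * szx - sz * sx) / (n * szz - sz * sz);
        return (sx - slope * sz) / n;
    }
}
\end{lstlisting}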
=====Linked Helices=====
First, track candidates that do not have at least five segments among their crosses are culled. Then four initial fits are performed, one for each pairing of a region one segment with a region three segment. In these fits, the slopes of the segments are used to calculate the momentum and charge of the particle. Those values are then given to a swimmer, which swims from the region one cross to the x-y plane of the region two cross and then on to the x-y plane of the region three cross. If the \textchi{}\textsuperscript{2} is bad or the integrated magnetic field over the path is zero, the candidate is culled.
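The two culls applied after the swim can be summarized in a short sketch; the cut value is a hypothetical parameter:
\begin{lstlisting}
class SwimCulls {
    // A candidate survives only if the fit quality is acceptable and the
    // swim actually saw a bending field (a zero field integral leaves the
    // momentum undetermined).
    static boolean survives(double chi2, double chi2Cut, double integratedBdl) {
        return chi2 < chi2Cut && Math.abs(integratedBdl) > 0.0;
    }
}
\end{lstlisting}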
The momentum and charge are then reconstructed again, this time from the second segments of the region 1 and region 3 crosses. If the momentum is greater than 11 GeV, it is set to 11 GeV. A state vector is formed by combining the region 1 cross information with the momentum. A Kalman fitter is created and attempts to fit the crosses; if this fails, the candidate is culled. The \textchi{}\textsuperscript{2} between the Kalman fit and the crosses is calculated, and the candidate is culled if the value is too large. One final check is made on whether the Kalman fit failed or produced an invalid final state vector. If the candidate passes, it is added to the list of track candidates.
====Crosses to Tracks Part II====
Now that the sub-algorithms have finished, the overall algorithm attempts to extrapolate from the existing information to see whether any other track candidates can be constructed. First, any tracks overlapping the track with the best \textchi{}\textsuperscript{2} are culled. The algorithm then constructs roads out of all currently existing segments. This process, in essence, orders the segments by sector and superlayer and then finds sequences of them which could indicate a track. A list of crosses belonging to those roads is then constructed, and missing road segments, called pseudosegments, are extrapolated from the existing segments.
The pseudosegments are added to the preexisting list of segments, and the process of making crosses is undertaken again. These new crosses are combined with the old ones and used to create new cross triplets. These triplets are then fed into the track candidate finder, as per Crosses to Tracks Part I. Once again, overlapping tracks are culled. If any track candidates have survived, they are written to the Hit Based data banks.
===Time Based Algorithm===
The Time Based algorithm takes the output of the Hit Based Algorithm and includes more time information in order to refine the reconstruction.
====Hits to Clusters====
This reconstruction begins by recomposing the clusters from the Hit Based reconstruction and further refining them. First, for each cluster the algorithm removes secondary hits, defined as hits that lie within the same layer as another hit and fail a test comparing their relative DOCAs. The clusters are then refit, with the aforementioned \textchi{}\textsuperscript{2} test applied if necessary, to ensure that the most accurate clusters have been found. Next, the algorithm attempts to resolve the left-right ambiguity in each cluster's linear fit, i.e. determine on which side of which wires the particle passed. Finally, the clusters and associated hits are updated with this new information.
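One common way to resolve such a left-right ambiguity, shown here purely as an illustration and not as the actual clas12-offline-software logic, is to try every side assignment and keep whichever gives the best fit:
\begin{lstlisting}
import java.util.function.ToDoubleFunction;

class LeftRight {
    // Hypothetical: chi2For evaluates the cluster's linear fit with hits
    // placed on the given sides (+1 = right, -1 = left) of their wires.
    static int[] resolve(int nHits, ToDoubleFunction<int[]> chi2For) {
        int[] best = null;
        double bestChi2 = Double.MAX_VALUE;
        // Exhaustively try all 2^nHits assignments (fine for small clusters).
        for (long mask = 0; mask < (1L << nHits); mask++) {
            int[] sides = new int[nHits];
            for (int i = 0; i < nHits; i++) {
                sides[i] = ((mask >> i) & 1L) == 1L ? 1 : -1;
            }
            double chi2 = chi2For.applyAsDouble(sides);
            if (chi2 < bestChi2) {
                bestChi2 = chi2;
                best = sides;
            }
        }
        return best;
    }
}
\end{lstlisting}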
====Clusters to Segments====
Each cluster is subjected to a test wherein the average DOCA of its hits is calculated and compared to the average size of the cells where those hits were recorded. Clusters which fail this test are culled. The remaining clusters then undergo the same process as the Hit Based clusters. All of the Hit Based tracks are then read into memory and the hit-segment associations are set accordingly.
====Reconstructing Tracks====
The algorithm now begins track reconstruction, failing if the Hit Based tracks were not saved correctly. First, all of the Hit Based track information stored in the data banks is read into an array, and all of the segment-track associations are set. The algorithm then considers each track, failing it if it has fewer than four segments, and creates crosses in the same fashion as the Hit Based algorithm. These tracks are then run through a Kalman fitter, and a track is culled if the magnitude of its reconstructed interaction vertex position is too large. Finally, for each surviving track, the overall trajectory is calculated and the associated hits and segments are set accordingly.
==Appendix C==
This appendix contains the source code for the Raster Based reconstruction algorithm. There are some minor cosmetic changes to improve readability in this document.
\begin{lstlisting}[showspaces=false,showstringspaces=false]
package org.jlab.service.dc;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.jlab.clas.swimtools.MagFieldsEngine;
import org.jlab.clas.swimtools.Swim;
import org.jlab.clas.swimtools.Swimmer;
import org.jlab.io.base.DataBank;
import org.jlab.io.base.DataEvent;
import org.jlab.io.hipo.HipoDataEvent;
import org.jlab.io.hipo.HipoDataSource;
import org.jlab.io.hipo.HipoDataSync;
import org.jlab.jnp.hipo.schema.SchemaFactory;
import org.jlab.rec.dc.Constants;
/**
 * This class is intended to be used to reconstruct the interaction vertex from
 * previously calculated HB/TB and CVT data
 *
 * @author friant
 */
public class DCRBEngine extends DCEngine {
//Global Variables static String engine = ""; static float solVal = -1.0f * -1.0f;//-1.0 multiplier to correct for sign convention mismatch static float torVal = -1.0f; static float zMinGlobal = -10.0f; static float zMaxGlobal = 10.0f; static float percentGridSearch = 0.10f; static int samplesGridSearch = 5; static int iterationsVertex = 2; static int samplesVertex = 5; static int maxThreads = 1; /** * Constructor */ public DCRBEngine() { super("DCRB"); } /** * Main method * * @param args */ public static void main(String[] args){ //Ensure that RasterBased Data Banks are defined in the program SchemaFactory fact = new SchemaFactory(); fact.initFromDirectory("CLAS12DIR", "etc/bankdefs/hipo"); //Hipo Reader and Writer HipoDataSource reader = new HipoDataSource(); HipoDataSync writer = new HipoDataSync(fact); //Files File inputFile; File outputFile; //Command Line Options boolean help = false; //Required String input = ""; String output = ""; //Optional String gridSearchSamples = ""; String gridSearchPercent = ""; String solenoid = ""; String toroid = ""; String vertexSamples = ""; String vertexIterations = ""; String threads = ""; String lowerBound = ""; String upperBound = ""; //Parse for(int i = 0; i < args.length; i++){ switch(args[i]){ case "-i": if(i + 1 < args.length){ input = args[i + 1]; } else{ System.out.println("Missing Parameter For Option -i"); return; } break; case "-o": if(i + 1 < args.length){ output = args[i + 1]; } else{ System.out.println("Missing Parameter For Option -o"); return; } break; case "-e": if(i + 1 < args.length){ engine = args[i + 1]; } else{ System.out.println("Missing Parameter For Option -e"); return; } break; case "-g": if(i + 2 < args.length){ gridSearchSamples = args[i + 1]; gridSearchPercent = args[i + 2]; } else{ System.out.println("Missing Parameter For Option -g"); return; } break; case "-h": help = true; break; case "-s": if(i + 1 < args.length){ solenoid = args[i + 1]; } else{ System.out.println("Missing Parameter For Option -s"); return; } break; case "-t": if(i + 1 < args.length){ toroid = args[i + 1]; } else{ System.out.println("Missing Parameter For Option -t"); return; } break; case "-v": if(i + 2 < args.length){ vertexSamples = args[i + 1]; vertexIterations = args[i + 2]; } else{ System.out.println("Missing Parameter For Option -v"); return; } break; case "-x": if(i + 1 < args.length){ threads = args[i + 1]; } else{ System.out.println("Missing Parameter For Option -x"); return; } break; case "-z": if(i + 2 < args.length){ lowerBound = args[i + 1]; upperBound = args[i + 2]; } else{ System.out.println("Missing Parameter For Option -z"); return; } break; } } //Attempt to use command line parameters to set values ////Input File if(input.isEmpty()){ System.out.println("Input File Required"); help = true; } else{ inputFile = new File(input); if(inputFile.exists() && !inputFile.isDirectory()){ reader.open(inputFile); } else{ System.out.println("Input File Not Found"); return; } } ////output File if(output.isEmpty()){ System.out.println("Output File Required"); help = true; } else{ outputFile = new File(output); if(outputFile.exists() && !outputFile.isDirectory()){ outputFile.delete(); } try{ outputFile.createNewFile(); writer.open(outputFile.getAbsolutePath()); } catch(IOException e){ System.out.println("Could Not Create Output File"); return; } } ////Engine if(!engine.isEmpty()){ if(!(engine.equals("DCHB") || engine.equals("DCTB"))){ System.out.println("Invalid Engine Specifified"); help = true; } } ////Grid Search Parameters 
if(!(gridSearchSamples.isEmpty() || gridSearchPercent.isEmpty())){ try{ samplesGridSearch = Integer.parseInt(gridSearchSamples); percentGridSearch = Float.parseFloat(gridSearchPercent); } catch(NumberFormatException e){ System.out.println("Invalid Number Format For Grid Search Parameters"); help = true; } } ////Solenoid Scale if(!solenoid.isEmpty()){ try{ solVal = Float.parseFloat(solenoid); solVal = -1.0f * solVal;//-1.0 multiplier to correct for sign convention mismatch } catch(NumberFormatException e){ System.out.println("Invalid Number Format For Solenoid"); help = true; } } ////Toroid Scale if(!toroid.isEmpty()){ try{ torVal = Float.parseFloat(toroid); } catch(NumberFormatException e){ System.out.println("Invalid Number Format For Toroid"); help = true; } } ////Vertex Search Parameters if(!(vertexSamples.isEmpty() || vertexIterations.isEmpty())){ try{ samplesVertex = Integer.parseInt(vertexSamples); iterationsVertex = Integer.parseInt(vertexIterations); } catch(NumberFormatException e){ System.out.println("Invalid Number Format For Vertex Search Parameters"); help = true; } } ////Threads if(!threads.isEmpty()){ try{ maxThreads = Integer.parseInt(threads); } catch(NumberFormatException e){ System.out.println("Invalid Number Format For Number Of Threads To Use"); help = true; } } ////Target Bounds if(!(lowerBound.isEmpty() || upperBound.isEmpty())){ try{ zMinGlobal = Float.parseFloat(lowerBound); zMaxGlobal = Float.parseFloat(upperBound); } catch(NumberFormatException e){ System.out.println("Invalid Number Format For Target Bounds"); help = true; } } //Print help message and exit if(help){ printHelp(); return; } //Init Engines MagFieldsEngine magField = new MagFieldsEngine(); magField.init(); DCRBEngine thisEngine = new DCRBEngine(); thisEngine.init(); //Apply magnetic field scaling Swimmer.setMagneticFieldsScales(solVal, torVal, 0.0); //Process data events int count = 0; while(reader.hasEvent()){ DataEvent event = reader.getNextEvent(); if(!event.hasBank("RasterBasedTrkg::RBHits")){ ((HipoDataEvent)event).initDictionary(fact); } thisEngine.processDataEvent(event); writer.writeEvent(event); System.out.println("EVENT " + count + " PROCESSED\r\n"); count++; } //Tidy up before leaving reader.close(); writer.close(); } /** * Print help message to the terminal */ private static void printHelp(){ System.out.println( "FriTracking Command Line Options: \r\n" + " -Required: \r\n" + " -i Input Hipo File \r\n" + " Requires 1 Parameter: \r\n" + " 1 (String): Path to desired input file. \r\n" + " Ex: -i /path/to/my/input/file.hipo \r\n" + " \r\n" + " -o Output Hipo File \r\n" + " Requires 1 Parameter: \r\n" + " 1 (String): Path to the desried ouput file. \r\n" + " Will overwrite file if exists. \r\n" + " Ex: -o /path/to/my/output/file.hipo \r\n" + " \r\n" + " -Optional: \r\n" + " -e Engine To Source Data From \r\n" + " Requires 1 Parameter: \r\n" + " 1 (String): Either \"DCHB\" or \"DCTB\" \r\n" + " Default Behavior: \r\n" + " Use DCTB if available and DCHB if not. \r\n" + " Ex: -e DCHB \r\n" + " \r\n" + " -g Grid Search Parameters \r\n" + " Requires 2 Parameters: \r\n" + " 1 (Int ): The number of samples to take per\r\n" + " momentum dimension around the \r\n" + " region-1 cross. \r\n" + " 2 (Float ): The percent of the nominal value \r\n" + " to search within. 
\r\n" + " Default Behavior: \r\n" + " 5 Samples within 10% of the nominal value \r\n" + " Ex: -g 7 0.02 \r\n" + " \r\n" + " -h Print This Message \r\n" + " \r\n" + " -s Solenoid Field Scale Factor \r\n" + " Requires 1 Parameter \r\n" + " 1 (Float ): The number by which to scale the \r\n" + " solenoid's magnetic field. Note: \r\n" + " a value of 0.0 will likely cause \r\n" + " failure. \r\n" + " Default Behavior: \r\n" + " Scale by -1.0. \r\n" + " Ex: -s 0.05 \r\n" + " \r\n" + " -t Toroid Field Scale Factor \r\n" + " Requires 1 Parameter \r\n" + " 1 (Float ): The number by which to scale the \r\n" + " toroid's magnetic field. Note: a \r\n" + " value of 0.0 will likely cause \r\n" + " failure. \r\n" + " Default Behavior: \r\n" + " Scale by -1.0. \r\n" + " Ex: -s 0.1 \r\n" + " \r\n" + " -v Vertex Search Parameters \r\n" + " Requires 2 Parameters: \r\n" + " 1 (Int ): The number of samples to take \r\n" + " within the target range to find \r\n" + " the best z value. \r\n" + " 2 (Int ): The number of recursive \r\n" + " iterations to go through about \r\n" + " each local minima. \r\n" + " Default Behavior: \r\n" + " 5 samples with 2 iterations. \r\n" + " Ex: -v 10 2 \r\n" + " \r\n" + " -x Threads To Use \r\n" + " Requires 1 Parameter: \r\n" + " 1 (Int ): The maximum number of threads to \r\n" + " use while searching for minima. \r\n" + " Default Behavior: \r\n" + " Only use 1 thread. \r\n" + " Ex: -x 4 \r\n" + " \r\n" + " -z Target Bounds \r\n" + " Requires 2 Parameters: \r\n" + " 1 (Float ): The lower bound in the \r\n" + " z-direction in which to search \r\n" + " for the interaction vertex. \r\n" + " [Unit = cm] \r\n" + " 2 (Float ): The upper bound in the \r\n" + " z-direction in which to search \r\n" + " for the interaction vertex. \r\n" + " [Unit = cm] \r\n" + " Default Behavior: \r\n" + " Lower Bound = -10.0, Upper Bound = 10.0. 
\r\n" + " Ex: -z -2.5 5.0 \r\n" + "\r\n"); } /** * Initialize the engine * * @return */ @Override public boolean init() { // Load cuts Constants.Load(); super.setStartTimeOption(); super.LoadTables(); return true; } /** * Generic process data event function * * @param event * @return */ @Override public boolean processDataEvent(DataEvent event){ boolean hasCVTData = event.hasBank("CVTRec::Tracks"); boolean hasHBData = event.hasBank("HitBasedTrkg::HBTracks"); if(hasCVTData && hasHBData){ //processDataEventBoth(event);//Just use HB for now processDataEventHB(event); } else if(hasCVTData){ processDataEventCVT(event); } else if(hasHBData){ processDataEventHB(event); } return true; } /** * Process data event for when HB and CVT data is available * * @param event * @return */ public boolean processDataEventBoth(DataEvent event){ System.out.println("Process for Both CVT and HB Not Yet Implemented"); return true; } /** * Process data event for when only CVT data is available * * @param event * @return */ public boolean processDataEventCVT(DataEvent event){ System.out.println("Process for CVT Not Yet Implemented"); return true; } /** * Process data event for when only HB/TB data is available * * @param event * @return */ public boolean processDataEventHB(DataEvent event) { //Pull info out of TB/HB Banks String sourceTracks; String sourceCrosses; if(engine.isEmpty()){ if(event.hasBank("TimeBasedTrkg::TBTracks")){ event.appendBank(copyBank(event, "TimeBasedTrkg::TBHits", "RasterBasedTrkg::RBHits")); event.appendBank(copyBank(event, "TimeBasedTrkg::TBClusters", "RasterBasedTrkg::RBClusters")); event.appendBank(copyBank(event, "TimeBasedTrkg::TBSegments", "RasterBasedTrkg::RBSegments")); event.appendBank(copyBank(event, "TimeBasedTrkg::TBCrosses", "RasterBasedTrkg::RBCrosses")); sourceTracks = "TimeBasedTrkg::TBTracks"; sourceCrosses = "TimeBasedTrkg::TBCrosses"; } else{ event.appendBank(copyBank(event, "HitBasedTrkg::HBHits", "RasterBasedTrkg::RBHits")); event.appendBank(copyBank(event, "HitBasedTrkg::HBClusters", "RasterBasedTrkg::RBClusters")); event.appendBank(copyBank(event, "HitBasedTrkg::HBSegments", "RasterBasedTrkg::RBSegments")); event.appendBank(copyBank(event, "HitBasedTrkg::HBCrosses", "RasterBasedTrkg::RBCrosses")); sourceTracks = "HitBasedTrkg::HBTracks"; sourceCrosses = "HitBasedTrkg::HBCrosses"; } } else if(engine.equals("DCHB")){ if(event.hasBank("TimeBasedTrkg::TBTracks")){ event.appendBank(copyBank(event, "HitBasedTrkg::HBHits", "RasterBasedTrkg::RBHits")); event.appendBank(copyBank(event, "HitBasedTrkg::HBClusters", "RasterBasedTrkg::RBClusters")); event.appendBank(copyBank(event, "HitBasedTrkg::HBSegments", "RasterBasedTrkg::RBSegments")); event.appendBank(copyBank(event, "HitBasedTrkg::HBCrosses", "RasterBasedTrkg::RBCrosses")); sourceTracks = "HitBasedTrkg::HBTracks"; sourceCrosses = "HitBasedTrkg::HBCrosses"; } else{ return false; } } else if(engine.equals("DCTB")){ if(event.hasBank("TimeBasedTrkg::TBTracks")){ event.appendBank(copyBank(event, "TimeBasedTrkg::TBHits", "RasterBasedTrkg::RBHits")); event.appendBank(copyBank(event, "TimeBasedTrkg::TBClusters", "RasterBasedTrkg::RBClusters")); event.appendBank(copyBank(event, "TimeBasedTrkg::TBSegments", "RasterBasedTrkg::RBSegments")); event.appendBank(copyBank(event, "TimeBasedTrkg::TBCrosses", "RasterBasedTrkg::RBCrosses")); sourceTracks = "TimeBasedTrkg::TBTracks"; sourceCrosses = "TimeBasedTrkg::TBCrosses"; } else{ return false; } } else{ return false; } //Create the RBTracks Bank DataBank rbBank = copyBank(event, 
sourceTracks, "RasterBasedTrkg::RBTracks"); //Raster variables float rasterX = 0.0f; float rasterY = 0.0f; float rasterUX = 0.05f;//0.5mm uncertainty in either direction float rasterUY = 0.05f;//0.5mm uncertainty in either direction if(event.hasBank("MC::Particle")){ rasterX = event.getBank("MC::Particle").getFloat("vx", 0); rasterY = event.getBank("MC::Particle").getFloat("vy", 0); } float[] rasterInfo = new float[]{rasterX, rasterUX, rasterY, rasterUY}; //Calculate the interaction vertex for each for each track in the event for(int i = 0; i < rbBank.rows(); i++){ float[] trackInfo = new float[7]; trackInfo[0] = event.getBank(sourceTracks).getFloat("t1_x" , i); trackInfo[1] = event.getBank(sourceTracks).getFloat("t1_y" , i); trackInfo[2] = event.getBank(sourceTracks).getFloat("t1_z" , i); trackInfo[3] = event.getBank(sourceTracks).getFloat("t1_px", i); trackInfo[4] = event.getBank(sourceTracks).getFloat("t1_py", i); trackInfo[5] = event.getBank(sourceTracks).getFloat("t1_pz", i); trackInfo[6] = (float)(event.getBank(sourceTracks).getByte("q", i)); //Calculate float[] output = new float[8]; Arrays.fill(output, Float.NaN); float doca = -1.0f; try { doca = interactionVertexGridSearch(samplesGridSearch, rasterInfo, trackInfo, percentGridSearch, output); } catch (InterruptedException | ExecutionException exception) { Logger.getLogger(DCRBEngine.class.getName()).log(Level.SEVERE, null, exception); System.out.println(exception); } //Make sure that the momentum is pointing in the right direction if(output[5] < 0.0){ output[3] = output[3] * -1.0f; output[4] = output[4] * -1.0f; output[5] = output[5] * -1.0f; } //System.out.println("Doca: " + doca); //System.out.println(Arrays.toString(output)); //Set Values rbBank.setFloat("Vtx0_x", i, output[0]); rbBank.setFloat("Vtx0_y", i, output[1]); rbBank.setFloat("Vtx0_z", i, output[2]); rbBank.setFloat( "p0_x", i, output[3]); rbBank.setFloat( "p0_y", i, output[4]); rbBank.setFloat( "p0_z", i, output[5]); rbBank.setFloat( "doca", i, doca); } //Put Tracks in Event event.appendBank(rbBank); return true; } /** * This method performs a three dimensional grid search within the * uncertainty in the track's momentum. 
* * @param samples the number of samples to ake in each dimension * @param rasterInfo the array {x, uncert x, y, uncert y} * @param trackInfo the array{vx, vy, vz, px, py, pz} * @param uncertainty + or - percent uncertainty in track momentum * @param out the array to fill with the ultimate swim output * result * @return the doca of the ultimate swim output result */ private float interactionVertexGridSearch(int samples, float[] rasterInfo, float[] trackInfo, float uncertainty, float[] out) throws InterruptedException, ExecutionException{ //define bounds float upperBoundPX = trackInfo[3] * (1.0f + uncertainty); float lowerBoundPX = trackInfo[3] * (1.0f - uncertainty); float upperBoundPY = trackInfo[4] * (1.0f + uncertainty); float lowerBoundPY = trackInfo[4] * (1.0f - uncertainty); float upperBoundPZ = trackInfo[5] * (1.0f + uncertainty); float lowerBoundPZ = trackInfo[5] * (1.0f - uncertainty); //choose the max radius of uncertainty to check doca against float rasterErr = Math.max(rasterInfo[1], rasterInfo[3]); //define useful arrays float[][][][] output = new float[samples][samples][samples][8]; ArrayList<ArrayList<ArrayList<Future<Float>>>> doca = new ArrayList(); ArrayList<int[]> localMinIndices = new ArrayList<>(); //Define thread service ExecutorService ex = Executors.newFixedThreadPool(maxThreads); //Find the interaction vertices and docas to sift through for(int i = 0; i < samples; i++){ doca.add(new ArrayList<>()); for(int j = 0; j < samples; j++){ doca.get(i).add(new ArrayList<>()); for(int k = 0; k < samples; k++){ //Set Swimmer params float[] swimParams = new float[]{ trackInfo[0], trackInfo[1], trackInfo[2], lowerBoundPX + i * (upperBoundPX - lowerBoundPX) / (samples - 1), lowerBoundPY + j * (upperBoundPY - lowerBoundPY) / (samples - 1), lowerBoundPZ + k * (upperBoundPZ - lowerBoundPZ) / (samples - 1), trackInfo[6]};
//Get interaction vertices doca.get(i).get(j).add(ex.submit(new ThreadedVertexFinder( iterationsVertex, samplesVertex, zMinGlobal, zMaxGlobal, rasterInfo[0], rasterInfo[2], swimParams, output[i][j][k]))); } } } //Wait for all tasks to finish ex.shutdown(); ex.awaitTermination(10, TimeUnit.DAYS);//Arbitrarily large //Find local mins for(int i = 0; i < samples; i++){ for(int j = 0; j < samples; j++){ for(int k = 0; k < samples; k++){ //check if within rasterErr if(doca.get(i).get(j).get(k).get() < rasterErr){ localMinIndices.add(new int[]{i, j, k}); continue; } //check if unrealistically massive if(doca.get(i).get(j).get(k).get() > Float.MAX_VALUE / 2.0){ continue; } //check px component if(i == 0 && doca.get(i).get(j).get(k).get() > doca.get(i + 1).get(j).get(k).get()){ continue; } else if (i > 0 && i < samples - 1 && (doca.get(i).get(j).get(k).get() > doca.get(i + 1).get(j).get(k).get() || doca.get(i).get(j).get(k).get() > doca.get(i - 1).get(j).get(k).get())){ continue; } else if(i == samples - 1 && doca.get(i).get(j).get(k).get() > doca.get(i - 1).get(j).get(k).get()){ continue; } //check py component if(j == 0 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j + 1).get(k).get()){ continue; } else if (j > 0 && j < samples - 1 && (doca.get(i).get(j).get(k).get() > doca.get(i).get(j + 1).get(k).get() || doca.get(i).get(j).get(k).get() > doca.get(i).get(j - 1).get(k).get())){ continue; } else if(j == samples - 1 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j - 1).get(k).get()){ continue; } //check pz component if(k == 0 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k + 1).get()){ continue; } else if (k > 0 && k < samples - 1 && (doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k + 1).get() || doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k - 1).get())){ continue; } else if(k == samples - 1 && doca.get(i).get(j).get(k).get() > doca.get(i).get(j).get(k - 1).get()){ continue; } //add to local min list localMinIndices.add(new int[]{i, j, k}); } } } //Cull values that fail zMinGlobal < z < zMaxGlobal for(int i = 0; i < localMinIndices.size(); i++){ int[] idx = localMinIndices.get(i); if(output[idx[0]][idx[1]][idx[2]][2] < zMinGlobal || output[idx[0]][idx[1]][idx[2]][2] > zMaxGlobal){ localMinIndices.remove(i); i--; } } //Exit int[] i; switch(localMinIndices.size()){ case 0: //Fail gracefully Arrays.fill(out, Float.NaN); return Float.NaN; case 1: //Only one value i = localMinIndices.get(0); System.arraycopy(output[i[0]][i[1]][i[2]], 0, out, 0, 8); return doca.get(i[0]).get(i[1]).get(i[2]).get(); default: //Find value which minimizes change from nominal track float minIdxDiff = Float.MAX_VALUE; float minDoca = Float.MAX_VALUE; boolean inRasterErr = false; int minIndex = 0; for(int[] idx : localMinIndices){ float swimDoca = doca.get(idx[0]).get(idx[1]).get(idx[2]).get(); //System.out.println("idx: " + Arrays.toString(idx) + ", doca: " + swimDoca + ", z: " + output[idx[0]][idx[1]][idx[2]][2]); if(swimDoca < rasterErr){ inRasterErr = true; float idxDiff = (float)Math.sqrt(Math.pow(idx[0] - (float)samples / 2, 2.0) + Math.pow(idx[1] - (float)samples / 2, 2.0) + Math.pow(idx[2] - (float)samples / 2, 2.0)); if(idxDiff < minIdxDiff){ minIdxDiff = idxDiff; minIndex = localMinIndices.indexOf(idx); } } else if(inRasterErr == false){ if(swimDoca < minDoca){ minDoca = swimDoca; minIndex = localMinIndices.indexOf(idx); } } } i = localMinIndices.get(minIndex); //System.out.println("Winner: " + Arrays.toString(i)); System.arraycopy(output[i[0]][i[1]][i[2]], 0, out, 0, 
8); return doca.get(i[0]).get(i[1]).get(i[2]).get(); } } /** * This method uses a swimmer to calculate the beam's doca position relative to the raster beam axis. * * @param iterations The maximum number of recursive steps * @param samples The number of sample points to take between zMin and zMax for each step. * @param zMin The lower bound of the area of interest. Should be below the lower bound of the target * @param zMax The upper bound of the area of interest. Should be above the upper bound of the target * @param rasterX The rasterized beam x position * @param rasterY The rasterized beam y postiion * @param swim A Swim class which has been set to the position and momentum of interest * @param out A pointer to a float array which will be filled by this method. Will be set to the Swim output or NaN * @return the doca to the rasterized beam coords; -1.0 if no mimimum was found */ private float findInteractionVertex(int iterations, int samples, float zMin, float zMax, float rasterX, float rasterY, Swim swim, float[] out){ //Define useful arrays float[][] swimOutput = new float[samples][8]; float[] doca = new float[samples]; int[] docaIndex = new int[samples]; int docaLength = 0; int[] localMin = new int[samples]; int localMinLength = 0; //Get swim outputs for(int i = 0; i < samples; i++){ double[] temp = swim.SwimToPlaneLab(zMin + i * (zMax - zMin) / (samples - 1)); if(temp == null){ swimOutput[i] = null; continue; } for(int j = 0; j < 8; j++){ swimOutput[i][j] = (float)temp[j]; } } //Calculate the doca for each sample point for(int i = 0; i < samples; i++){ if(swimOutput[i] == null){ continue; } doca[docaLength] = (float)Math.sqrt(Math.pow(rasterX - swimOutput[i][0], 2.0f) + Math.pow(rasterY - swimOutput[i][1], 2.0f)); docaIndex[docaLength] = i; docaLength++; } //Handle variable docaLengths switch(docaLength){ case 0: //Fail gracefully localMinLength = 0; break; case 1: //Set only point as local minimum localMinLength = 1; localMin[0] = 0; break; default: //Find local minima if(doca[0] < doca[1]){ localMin[localMinLength] = 0; localMinLength++; } for(int i = 1; i < docaLength - 1; i++){ if(doca[i] < doca[i - 1] && doca[i] < doca[i + 1]){ localMin[localMinLength] = i; localMinLength++; } } if(doca[docaLength - 1] < doca[docaLength - 2]){ localMin[localMinLength] = docaLength - 1; localMinLength++; } break; } //Exit? if(iterations == 1){ int index = -1; float smallest = Float.MAX_VALUE; //Find a minimum? 
if(localMinLength == 0){ Arrays.fill(out, Float.NaN); return Float.MAX_VALUE; } //Find the smallest doca for(int i = 0; i < localMinLength; i++){ if(doca[localMin[i]] < smallest){ index = i; } } System.arraycopy(swimOutput[localMin[index]], 0, out, 0, 8); return doca[localMin[index]]; } //Recursively call this method on each of the local minima float[][] minOut = new float[localMinLength][8]; float[] minDoca = new float[localMinLength]; for(int i = 0; i < localMinLength; i++){ if(docaIndex[localMin[i]] == 0){ float newZMin = zMin - (zMax - zMin) / 2; float newZMax = zMax - (zMax - zMin) / 2; if(newZMin < 2.0 * zMinGlobal - zMaxGlobal){ Arrays.fill(minOut[i], Float.NaN); minDoca[i] = Float.MAX_VALUE; } else{ minDoca[i] = findInteractionVertex(iterations, samples, newZMin, newZMax, rasterX, rasterY, swim, minOut[i]); } } else if(docaIndex[localMin[i]] == samples - 1){ float newZMin = zMin + (zMax - zMin) / 2; float newZMax = zMax + (zMax - zMin) / 2; if(newZMin > 2.0 * zMaxGlobal - zMinGlobal){ Arrays.fill(minOut[i], Float.NaN); minDoca[i] = Float.MAX_VALUE; } else{ minDoca[i] = findInteractionVertex(iterations, samples, newZMin, newZMax, rasterX, rasterY, swim, minOut[i]); } } else{ minDoca[i] = findInteractionVertex(iterations - 1, samples, swimOutput[localMin[i]][2] - (zMax - zMin) / (samples - 1), swimOutput[localMin[i]][2] + (zMax - zMin) / (samples - 1), rasterX, rasterY, swim, minOut[i]); } } //Find the smallest doca int index = -1; float smallest = Float.MAX_VALUE; for(int i = 0; i < localMinLength; i++){ if(minOut[i] != null && minDoca[i] < smallest) { index = i; } } //Exit if(index == -1){ Arrays.fill(out, Float.NaN); return Float.MAX_VALUE; } System.arraycopy(minOut[index], 0, out, 0, 8); return minDoca[index]; } /** * This class is a passthrough class to allow the findInteractionVertex * method to be threaded */ private class ThreadedVertexFinder implements Callable<Float>{ //Copies of method parameters int iterations; int samples; float zMin; float zMax; float rasterX; float rasterY; Swim swim; float[] out; //Constructor public ThreadedVertexFinder(int _iterations, int _samples, float _zMin, float _zMax, float _rasterX, float _rasterY, float[] _swimParams, float[] _out){ iterations = _iterations; samples = _samples; zMin = _zMin; zMax = _zMax; rasterX = _rasterX; rasterY = _rasterY; swim = new Swim(); swim.SetSwimParameters(_swimParams[0], _swimParams[1], _swimParams[2], _swimParams[3], _swimParams[4], _swimParams[5], (int)_swimParams[6]); out = _out; } //Passthrough @Override public Float call(){ return findInteractionVertex(iterations, samples, zMin, zMax, rasterX, rasterY, swim, out); } } /** * Provides a copy mechanism from a pre-existing dataBank to a new one. * Note that this does not natively append the new bank to the old event. 
* * @param event * @param oldBank * @param newBank */ private DataBank copyBank(DataEvent event, String oldBankName, String newBankName){ DataBank oldBank = event.getBank(oldBankName); if(oldBank == null){ return null; } DataBank newBank = event.createBank(newBankName, oldBank.rows()); for(int i = 0; i < oldBank.rows(); i++){ for(int j = 0; j < oldBank.columns(); j++){ switch(oldBank.getDescriptor().getProperty("type", oldBank.getColumnList()[j])){ case 1://Byte newBank.setByte(oldBank.getColumnList()[j], i, (byte)(oldBank.getByte(oldBank.getColumnList()[j], i))); break; case 2://Short newBank.setShort(oldBank.getColumnList()[j], i, (short)(oldBank.getShort(oldBank.getColumnList()[j], i))); break; case 3://Int newBank.setInt(oldBank.getColumnList()[j], i, (int)(oldBank.getInt(oldBank.getColumnList()[j], i))); break; case 4://Unused break; case 5://Float newBank.setFloat(oldBank.getColumnList()[j], i, (float)(oldBank.getFloat(oldBank.getColumnList()[j], i))); break; case 6://Double newBank.setDouble(oldBank.getColumnList()[j], i, (double)(oldBank.getDouble(oldBank.getColumnList()[j], i))); break; } } } return newBank; }
}
\end{lstlisting}