## ASTR 4880 Computer Lab Session 4

Log in to your astro1 account, open an xgterm window, change to your iraf directory, whatever it is called, and type cl to start IRAF. At the cl command prompt, type:

show imtype

If it says "imh", quit IRAF and edit the file login.cl and change it to "hhh"; then type cl to start IRAF again.

Create a new data directory with mkdir and change to that new directory. You will probably want to create it on the file system to which there is a data2 link in your iraf directory. Now, copy all of the files into this directory with the command:

copy /mnt/vol01/ndm/CCD3/20091113/* ./

### Introduction to Data Reduction

Data reduction is the process of converting a raw image into a graph of detected photon counts versus wavelength, ready for astrophysical analysis. This process greatly reduces the size of the data file, hence the term.

In outline, the data reduction process is as follows:

1. Edit the headers of your images, as needed. (See Session 3.)
2. Construct a median bias frame (Session 2) and subtract it (Session 2) from all the data frames.
3. Construct a median flat frame.
4. Using the median flat, define the regions of the image containing the spectrum. These regions are called "apertures."
5. Define the regions containing the background light to be subtracted from the spectrum.
6. Within each aperture, sum all the pixels in the same line (that is, at the same wavelength) and subtract the background. This process is called "extraction."
7. Extract the stellar and the comparison lamp (in our case, Th-Ar) spectra using the same apertures.
8. Normalize the flat to its own median.
9. Divide the extracted stellar spectrum by the extracted, normalized flat. This step is called "flat fielding." (This is the simplest of the many methods for flat fielding spectra.)
10. Identify selected lines in the comparison lamp spectrum and construct a wavelength scale by least-squares fitting. This step is called "wavelength calibration."
11. Apply the wavelength scale to the stellar spectrum. This step is called "dispersion correction."
12. Admire the spectrum. You're finished for now. Later steps will come under the heading of "astrophysical analysis."

Steps 1-3 were covered in previous lab sessions. Since we are working with a new night of observations, we need to repeat steps 2 and 3 on the new data, skipping step 1 for now. Then, we'll start on steps 4-6.

1. Use imcombine to construct the median bias; consult the printed copy of the observing log to identify the bias frames.
2. In the same way, construct the median red flat frame. Subtract the median bias from it and from exposures 16-18.
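The arithmetic behind imcombine's median stacking and the bias subtraction is simple. Here is a minimal sketch in Python with numpy; the frame values and array shapes are invented for illustration, not taken from the actual data:

```python
import numpy as np

# Three hypothetical bias frames (in IRAF these would be read from disk).
bias_frames = np.stack([
    np.full((4, 4), 100.0),
    np.full((4, 4), 102.0),
    np.full((4, 4), 98.0),
])

# imcombine with combine=median takes the median pixel by pixel,
# which rejects outliers such as cosmic-ray hits.
median_bias = np.median(bias_frames, axis=0)

# Subtracting the median bias from a flat frame
# (the equivalent of imarith flat - bias flatb).
flat = np.full((4, 4), 5100.0)
flatb = flat - median_bias
```

The median, unlike the mean, is insensitive to a single wildly discrepant frame, which is why it is preferred for combining bias and flat exposures.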

### Aperture definition

To start, load the necessary IRAF packages. Type:

imred to load the very general Image Reduction package, then

ech to load the Echelle package.

You'll be presented with a long list of tasks included in this package. The ones whose names start with "ap" involve the aperture definition and extraction processes. Some of the ones whose names start with "s" involve operations with spectra, such as splot.

Warning regarding splot: if you use it in the echelle package, you'll start with a virgin set of task parameters, because we haven't used this package before. (Previously, you set up splot and set its parameters in the onedspec package.) Therefore, you'll need to set its parameters, such as hist, before using it. Although this may seem like an inconvenience, it can be an advantage if you want to use one set of parameters for one aspect of your work, but some other set for another aspect.

We have some choices here, but when getting started I prefer to do things one step at a time, so we'll choose the tasks that carry out individual steps in this process.

• apedit - presents the user with a graph for the purpose of defining the widths of the apertures and the background regions
• aptrace - constructs a polynomial function describing how the center of each aperture progresses through the image

Suppose your median flat is called flatred.fits and the bias-subtracted version is called flatredb.fits. The apedit parameter list should then look like this:

PACKAGE = echelle

input = flatredb.fits List of input images to edit
(apertur= ) Apertures
(referen= ) Reference images

(find = no) Find apertures?
(recente= no) Recenter apertures?
(resize = no) Resize apertures?
(edit = yes) Edit apertures?

(line = INDEF) Dispersion line
(nsum = 10) Number of dispersion lines to sum or median
(width = 50.) Profile centering width
(thresho= 0.) Detection threshold for profile centering

The profile centering width is the width of the order across the base; this number needs to be at least approximately correct.

The task will present you with a graph of a cross-cut across the echelle orders. With w and k, zoom in on the range of pixels 1 through 500 (approximately). Place the cursor on the third peak, at about pixel 200, and type m. You have now marked aperture 1, but the width of the aperture is inappropriate because the default parameters for this task are not designed for our data.

There are several ways to "resize" the aperture. The easiest for me is to put the horizontal crosshair at the y position of the plateau in the profile (see graph below) and type y. In the figure below (for illustration purposes, I used a single flat rather than the median), the bracket defining the width of the aperture has become wider, and the upper and lower limit values in the yellow bar have increased. Type c to find a new center for the aperture; it may have changed because of the different width.

If we are satisfied with the width of our aperture, we set the regions defining the background that will be subtracted from the spectrum during the extraction phase. With the cursor still on the aperture, type b. You'll be shown an interactive curve fitting plot zoomed in on the aperture you are working on.

The brackets at the bottom indicate the sample range that is being used for the fit, and the dashed horizontal line gives the fitted curve. Several things are wrong with this picture, again because the default task parameters are inappropriate.

• The fitted function is a horizontal line, while the background slopes. To fix this problem, type :order 2
• The brackets don't match the actual background regions. To fix this, type z with the cursor on each bracket to zap it.

Now, zoom in on the lower part of the graph with w, then b with the cursor where you want the bottom of the new window, or t with the cursor where you want the top. Set the new sample ranges to capture the minimum parts of the graph, and type f when ready to make a new fit. When you have set up the fitting, your new plot should look something like the figure below.
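What the background fitting does numerically can be sketched as follows: fit a low-order polynomial to the pixels in the sample regions on either side of the aperture, then evaluate and subtract it under the aperture. A hedged Python illustration, with an invented cross-cut profile and invented sample ranges:

```python
import numpy as np

x = np.arange(60, dtype=float)
# Invented cross-cut: a sloping background plus an order peak near x = 30.
background_true = 10.0 + 0.2 * x
profile = background_true + 100.0 * np.exp(-0.5 * ((x - 30.0) / 3.0) ** 2)

# Sample ranges on either side of the aperture (the brackets in the plot).
sample = np.r_[5:15, 45:55]

# IRAF's :order 2 counts coefficients, i.e. a straight line (degree 1).
coeffs = np.polyfit(x[sample], profile[sample], deg=1)
fitted_background = np.polyval(coeffs, x)

# During extraction the fitted background is subtracted at each pixel.
corrected = profile - fitted_background
```

With a sloping background, a horizontal fit (order 1) would over-subtract on one side of the order and under-subtract on the other; the straight-line fit removes the slope.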

Now that we have fixed these problems, we should be able to avoid having to fix them again. When finished with the background definition, type q and repeat the procedure for the next three apertures to the right, starting with m. They will be called numbers 2 - 4. Check the width of the aperture and the appropriateness of the background sample regions as before. The location of the background changes along the spectrum, so you'll need to adjust the sample ranges with each aperture.

When you are satisfied with your four apertures, type q and answer yes (hit Enter) to whether to write the aperture information to the database.

In your current directory, you'll now see a subdirectory called database. It contains two files with names beginning with ap. One takes its name from the image you just edited, and the other is called aplast. It always contains the aperture definition for the last file you edited. Examine either of these files; for now, they are both the same. For example, page database/aplast.

Now we are ready to trace the four apertures whose widths we have defined.

PACKAGE = echelle

input = flatredb.fits List of input images to trace
(apertur= ) Apertures
(referen= ) List of reference images
(find = no) Find apertures?
(recente= no) Recenter apertures?
(resize = no) Resize apertures?
(edit = no) Edit apertures?
(trace = yes) Trace apertures?
(fittrac= yes) Fit the traced points interactively?

(line = INDEF) Starting dispersion line
(nsum = 10) Number of dispersion lines to sum
(step = 10) Tracing step
(nlost = 3) Number of consecutive times profile is lost before quitting

(functio= legendre) Trace fitting function
(order = 2) Trace fitting function order
(sample = *) Trace sample regions
(naverag= 1) Trace average or median
(niterat= 0) Trace rejection iterations
(low_rej= 3.) Trace lower rejection sigma
(high_re= 3.) Trace upper rejection sigma
(grow = 0.) Trace rejection growing radius
(mode = ql)

This task, aptrace, also has the aperture editing task built in. However, since we have already edited our apertures, all the parameters related to that should now be set to no. (It is OK to do both the editing and the tracing with aptrace, but I am separating the two steps for pedagogic clarity.)

When ready, type :g to execute the task. Answer yes to all the questions. You'll see another interactive curve fitting graph. This time, the data are estimates of the position of the centers of the orders as a function of location in the image. The fitted polynomial describes the shape of the strip that the spectrum makes within the image. Your graph will look something like this.

Clearly, our polynomial is not a good fit. Its order needs to be increased a lot. In order to see as clearly as possible any difference between the data and the fit, type the graph key j to plot the residuals - the difference between each data point and the fitted curve. The points should now scatter uniformly around the graph, with the fit now a horizontal line through y = 0. If you see systematic waviness of the points relative to the fit, increase the order of the polynomial. You should use the lowest order that gives a good fit. The graph below now shows a good fit.
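The trace fit is an ordinary least-squares fit of a Legendre polynomial to the measured order centers, and the residual plot (the j key) simply shows data minus fit. A rough numpy analogue, with invented center measurements standing in for what aptrace measures:

```python
import numpy as np

# Invented aperture-center measurements along the dispersion axis.
line = np.arange(10.0, 2000.0, 10.0)             # tracing step = 10
center = 200.0 + 0.01 * line + 2e-6 * line ** 2  # gently curved trace

# Fit a Legendre polynomial to the trace (deg here is the polynomial degree).
fit = np.polynomial.legendre.Legendre.fit(line, center, deg=3)

# The j key plots these residuals; a good fit scatters them around zero.
residuals = center - fit(line)
rms = np.sqrt(np.mean(residuals ** 2))
```

If the residuals show systematic waviness rather than uniform scatter, the polynomial degree is too low; raise it one step at a time and refit.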

As usual, type q when you are satisfied with the fit. Answer yes to the question, and you'll see a similar graph for the next aperture. Delete outlying points if needed, but for the flat frame it probably won't be needed. The final q will give you the question about writing to the database; answer yes if satisfied. Your database files will now contain information about the fitted polynomial trace for each aperture.

### Aperture extraction

Here, we'll use apsum. This task adds up the data values in each row of pixels across the aperture, yielding one number for each pixel along the spectrum. The task will produce a new spectrum, so you need a name for an output file. I usually generate it by adding an 'x' before the dot in the file name.
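Stripped of the background fit and bad-pixel cleaning, the core arithmetic of apsum is just a sum across the aperture at each dispersion line. A minimal sketch, with an invented 2-D image and invented aperture limits:

```python
import numpy as np

# Invented image: dispersion runs along axis 0, and the cross-dispersion
# profile along axis 1; the order sits in columns 10 through 19.
image = np.zeros((100, 40))
image[:, 10:20] = 50.0

# Aperture limits as defined in the database (low inclusive, high exclusive).
low, high = 10, 20

# One number per pixel along the spectrum: the sum across the aperture.
spectrum = image[:, low:high].sum(axis=1)
```

The result is a 1-D array with one element per dispersion line, which is exactly the "graph of counts versus wavelength" described in the introduction (before wavelength calibration).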

PACKAGE = echelle

input = flatredb.fits List of input images
(output = flatredbx.fits) List of output spectra
(apertur= ) Apertures
(format = echelle) Extracted spectra format
(referen= ) List of aperture reference images
(profile= ) List of aperture profile images

(find = no) Find apertures?
(recente= no) Recenter apertures?
(resize = no) Resize apertures?
(edit = no) Edit apertures?
(trace = no) Trace apertures?
(fittrac= no) Fit the traced points interactively?
(extract= yes) Extract apertures?
(extras = no) Extract sky, sigma, etc.?
(review = yes) Review extractions?

(line = INDEF) Dispersion line
(nsum = 10) Number of dispersion lines to sum or median

(backgro= fit) Background to subtract (none|average|fit)
(weights= variance) Extraction weights (none|variance)
(pfit = fit1d) Profile fitting type (fit1d|fit2d)
(clean = yes) Detect and replace bad pixels?
(skybox = 1) Box car smoothing length for sky
(saturat= INDEF) Saturation level
(gain = 6) Photon gain (photons/data number)
(lsigma = 4.) Lower rejection threshold
(usigma = 4.) Upper rejection threshold
(nsubaps= 1) Number of subapertures per aperture
(mode = ql)

As before, this task has all the preceding steps built in. We've already done them, so we want the parameters related to editing and tracing set to no. The only yes values in the second section of the parameter list are for extract and review.

In the last section of the apsum parameter list, we specify that the value of the function we fitted to the background is to be subtracted at each pixel. Watch out for this one, since the default value of this parameter is none. The values for read noise and gain are needed for the variance weighting option, which is also called optimal extraction. This option must be used if one wants to clean out bad pixels.
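The idea behind variance weighting can be illustrated crudely: each pixel's contribution is weighted by the expected profile divided by its variance, which down-weights noisy pixels and is what makes bad-pixel rejection possible. This is a toy Python version of the principle, not the actual apsum algorithm; the profile, flux, and read-noise values are invented:

```python
import numpy as np

gain = 6.0          # photons per data number, as in the parameter list
read_noise = 10.0   # electrons; an assumed value for illustration

# Invented cross-dispersion cut: a known normalized profile times the flux.
profile = np.array([0.05, 0.2, 0.5, 0.2, 0.05])
flux_true = 1000.0
data = flux_true * profile

# Variance per pixel from photon statistics plus read noise.
variance = data / gain + (read_noise / gain) ** 2

# Variance-weighted (optimal) estimate of the total flux.
weights = profile / variance
flux = np.sum(weights * data) / np.sum(weights * profile)
```

A pixel flagged as bad would simply get zero weight, and the profile model fills in its share of the flux; that is why cleaning requires the variance-weighting option.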

Your output file is in IRAF image format, with two files having extensions .hhh and .hhd even if you specified a .fits extension in the output parameter. This is a feature of the version of IRAF currently running on astro1. Plot it with splot. Select aperture 1 to start, then select higher-numbered apertures with the right parenthesis key, ).

Now let's process the spectrum of alpha Cyg, number 17. All the steps are the same, except that the flat frame we just processed will be used as a reference. Therefore, we won't need to do the aperture editing or tracing interactively; we can just run apsum on the stellar spectrum. It might not be a bad idea to recenter the apertures, though. The first two sections of the parameter list for apsum now become:

PACKAGE = echelle

input = 17b.fits List of input images
(output = 17bx.fits) List of output spectra
(apertur= ) Apertures
(format = echelle) Extracted spectra format
(referen= last) List of aperture reference images
(profile= ) List of aperture profile images

(find = no) Find apertures?
(recente= yes) Recenter apertures?
(resize = no) Resize apertures?
(edit = no) Edit apertures?
(trace = no) Trace apertures?
(fittrac= no) Fit the traced points interactively?
(extract= yes) Extract apertures?
(extras = no) Extract sky, sigma, etc.?
(review = yes) Review extractions?

The later sections of the list remain the same. All the editing and tracing definition parameters will be taken from the reference image. Execute the task, and then plot the output spectrum with splot.

### Flat fielding

Now that we have extracted a flat lamp and a stellar spectrum, we want to divide the star by the flat field in order to:
• Remove pixel-to-pixel variations in quantum efficiency, if any
• Remove the variation in instrumental response with wavelength, which is responsible for the curvature in our output spectra

Try the following: imarith 17bx.fits / flatredbx.fits 17bxf.fits (substitute your own file names as appropriate) and then plot the result. Because the photon counts in the flat are much larger than those in the star, your resulting spectrum has values much less than one. More importantly, information about the level of photon counts in the original image has been lost.

In order to retain this information at least approximately, we want to divide the flat by its own median to obtain an image with a median value of 1. Dividing the stellar spectrum by this image will leave the median value of the stellar spectrum intact.
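In numbers, the normalization amounts to the following (a Python sketch with invented counts):

```python
import numpy as np

star = np.array([400.0, 420.0, 410.0, 390.0])
flat = np.array([50000.0, 52000.0, 51000.0, 49000.0])

# Direct division: values far below one; the photon-count level is lost.
naive = star / flat

# Normalize the flat to its own median first...
flat_norm = flat / np.median(flat)

# ...then divide: the stellar counts keep roughly their original level,
# while the pixel-to-pixel and wavelength-dependent response is removed.
star_flat = star / flat_norm
```

Because the normalized flat has a median of 1, the flat-fielded spectrum retains approximately the original count level, which matters later when estimating photon-noise uncertainties.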

Load the generic package and type epar normflat.

PACKAGE = generic