AN INTRODUCTORY MANUAL FOR RUNNING DOLPHOT

by Bill Harris (McMaster University), with considerable help from Pat Durrell (YSU)

May 2019

DOLPHOT is the umbrella term for a suite of software written by Andrew Dolphin. It has been used extensively in many recent programs for carrying out photometry of stellar populations in star clusters and galaxies. Originally it was built as an extension of an earlier package called HSTphot (Dolphin 2000, PASP 112, 1383), which was specifically tuned to do stellar photometry from the HST WFPC2 camera. DOLPHOT can be used on any type of digital image, but it is intended to be most effective for the more recent HST workhorse instruments, namely the ACS and WFC3 cameras. It's an impressive code in many different ways, with loads of built-in decision-making about every aspect of stellar photometry.

The following primer is meant to help users who are not familiar with Dolphot to get started. In writing this up I've had much help from Pat Durrell, along with tips from Andy Dolphin; any errors are my own, and if you find any, please let me know.

SOURCE CODE AND MANUALS

The webpage maintained by Andrew Dolphin (now at Raytheon Corporation) that contains all the necessary pieces to get DOLPHOT running is at

http://americano.dolphinsim.com/dolphot/

If you are starting from scratch and need to install your own local version, you'll need to download:

-- the DOLPHOT 2.0 base sources (these include the manuals as well)
-- the DOLPHOT 2.0 module sources for the camera data that you want to work on (ACS, WFC3, WFPC2, WFIRST)
-- the Pixel Area Map (PAM) for each camera (e.g. UVIS on WFC3), and
-- the PSFs for each of the filters you are using.

These PSF files are very big, so be aware that they take a long time to download. Be patient. The PSFs with "Anderson cores" might be superior, but they are not available for a lot of the filters in either camera.
This primer is intended as a quick introduction to using DOLPHOT, and it is aimed especially at measuring data from HST ACS/WFC and WFC3/UVIS. The complete official DOLPHOT manuals written by Dolphin are certainly useful, but they are also a bit cryptic in places. So in this primer, I'm going to step through the essentials of basic usage, and you can hopefully go on from there.

Going beyond the manuals, some help can be obtained from papers in the literature that other users have written up. Three good ones are:

Monelli et al. 2010, ApJ 720, 1225 -- has an extensive comparison between DOLPHOT and Allframe (Peter Stetson's code) applied to the same data.

Dalcanton et al. 2012, ApJS 200, 18
Williams et al. 2014, ApJS 215, 9 -- these give good summary lists and descriptions of the choices of DOLPHOT parameters that go into their photometry. The Dalcanton+ and Williams+ work specifically applies to crowded star fields (in this case, resolved stars in the M31 disk and in nearby galaxies), but they will be good for other fields too.

__________________________________________________________________________

COMPILING AND INSTALLING THE CODES

Once you've downloaded the source codes, put them in your top directory (or somewhere else safe). Once you un-tar them, they will create a subdirectory 'dolphot2.0' on your system. You will need to make sure the ACS and/or WFC3 modules are also untar'ed starting from the same place, so that the /acs and /wfc3 directories are put into the right places within /dolphot2.0.

Makefile: Look into the Makefile script and change whatever lines might be necessary. As an example, I needed to change the fortran compiler line to "export FC=gfortran", and since I want to run this on ACS and WFC3, I uncommented "export USEACS=1" and "export USEWFC3=1".
For larger computers, the CFLAG list can be expanded to produce 64-bit libraries:

export CFLAGS= -O3 -Wall -m64

The rest of Makefile probably doesn't need any changing. Then type "make" or "make all" to run Makefile, and you're done compiling the code. Makefile automatically also compiles the modules for ACS and WFC3 (presuming you have placed those modules in the appropriate directories before make-ing the primary source code), so you don't have to run those two other codes separately. The executables for all the DOLPHOT functions are found in the subdirectory /dolphot2.0/bin. The various HST library PSFs (*.psf) and pixel area maps (*pam.fits) for the cameras go into the "dolphot2.0/acs/data" and "dolphot2.0/wfc3/data" subdirectories.

If you do add a module later, or you want to change some other basic settings or parameters, you re-Make the code with

make clean
make all

and then refresh your shell file with 'source .cshrc', or log off and log on again.

For example, here is the file structure that I have after installing the packages. The master directory 'dolphot2.0' has the following:

acs/          bin/           convertpos.c     dolphot_common.h  dolphot_lib.o  fakelist.c  fakeproc.o  include/     param/      wfc3/
addstars.c    calcsky.c      display.c        dolphot_defs.h    dolphot.pdf    fakeproc.c  fits_lib.c  Makefile     synthimg.c  wfirst/
apphot.c      ccdred/        dolphot.c        dolphot_lib.c     dolphot.tar    fakeproc.h  fits_lib.o  Makefile~    versions.txt

Then its 'acs' subdirectory has the following:

acsandersonpsf.c  acsdistort_main.c  acsfilters.c   acsfitdistort.c  acsphot.c   acspsfdata.h  acsundistort.c  dolphotACS.pdf
acsdistort.c      acsdistort.o       acsfilters.h   acsmakepsf.c     acsphot.h   acsshowpsf.c  acsunmask.c     Makefile
acsdistort.h      acsfakelist.c      acsfilters.o   acsmask.c        acsphot.o   acsttpar.c    data/           versions.txt

And within that, the 'data' subdirectory has the following:

F475W.wfc1.psf  F606W.wfc1.psf  F814W.wfc1.psf  filters.dat      wfc1_pam.fits  wfc_idctab.dat
F475W.wfc2.psf  F606W.wfc2.psf  F814W.wfc2.psf  hrc_idctab.dat   wfc2_pam.fits

Lastly, you
need to add the DOLPHOT commands to your linux path, and maybe a convenient alias. For example, in your .cshrc file (or .bashrc, or maybe .bash_profile if using the bash shell), add lines like these (below, 'jarrah' is the name of my workstation):

set path = ($path /net/jarrah/dolphot2.0/bin)
alias dolphot '/net/jarrah/dolphot2.0/bin/dolphot'

These will let you run the various DOLPHOT commands on the linux command line from any other directory by just typing the command name.

__________________________________________________________________________

SETTING UP THE IMAGES

Suppose you have a series of ACS exposures of your target field. These can be either the raw *flt.fits or *flc.fits images as drawn directly from the HST MAST archive; I prefer to start with the *flc images, which are CTE-corrected. Suppose also these are in two filters, say F475W and F814W as common examples. From these you want to generate a combined, multidrizzled "reference image", *drc.fits or *drz.fits. This combined image should be the deepest one. For example, often the F814W_drc image will be deeper than the blue F475W_drc image, so use F814W as the master.

NOTE: This multidrizzled image should be the full multi-extension version, including the science image, the error array, and the data quality array.

I use the 'astrodrizzle' function in a jupyter notebook copied from STScI; see https://www.stsci.edu/scientific-community/software/drizzlepac for a sample version and tutorials. It might be that the HST Archive already has the right *.drc or *.drz images that you need, so you can just download them and go ahead. The reference image won't be used for any final photometry; it just sets the basis for the object detections and the final (x,y) coordinate system.

NOTE: The drizzled image can have a different scale than the individual *flc exposures, as set by the "final_scale" parameter in astrodrizzle.
That's OK; Dolphot does a good job of automatically converting xy coordinates from one image to another, but you need to set a couple of parameters first. See Appendix 1 below.

One modification that I have tried (but which I don't really recommend) is to use astrodrizzle to combine ALL the images in ALL the filters, to get a composite image that is still a bit deeper than in any single filter. In this case, though, the drizzled image will have FILTER = 'MULTIPLE' in the header for the filter keyword, and Dolphot doesn't recognize that. So you will have to do some creative cheating: use the iraf/pyraf 'hedit' function to change it, like so for the WFC3 camera as an example:

hedit image.fits[0] FILTER "F814W"
hedit image.fits[1] FILTER "F814W"
hedit image.fits[2] FILTER "F814W"
hedit image.fits[3] FILTER "F814W"

Notice that each image extension that has the FILTER keyword must be changed to a filter name that Dolphot will recognize.

To recap, the reference image serves two purposes:

(1) It provides a maximally deep master image on which to find target stars. This long list of candidate stars will then be measured by PSF-fitting on all the individual *flc exposures and then averaged up to get their final magnitudes.

(2) It provides a master coordinate scale that can be converted to World Coordinates (RA and Dec). The (x,y) locations of the target stars on each individual exposure will be transformed to the reference-image (x,y) scale.

This basic approach resembles Peter Stetson's Allframe code. The two codes give pretty similar results, but at the level of ~0.02 magnitude or so they can differ systematically (see Monelli et al. 2010 for detailed comparisons). Ultimately these differences boil down to fine details of how the photometry is done at a deep level in the codes, and may indicate the ultimate limits on how well the photometric scale can be established.
The style of DOLPHOT also resembles Allframe a fair bit in that the work is front-loaded: you need to do a lot of preparatory steps, but then you set the master code running and sit back (or maybe go home and let it run overnight). This is what you'll see in the steps listed below.

First, put all the relevant images of your target field (*flc, *drc, or else *flt, *drz) from a single camera into a single subdirectory. Now go through the following preparatory steps (I will use the ACS camera as an example, and to be specific let's suppose that you have images in the F475W and F814W filters):

(1) Mask bad pixels and multiply by the pixel area map:

acsmask *.fits

NOTE: According to the Dolphot manual, this operation overwrites the files, so if you want to keep the originals for a later purpose, back them up (or just download them again from the HST Archive). There are flags you can put onto this command, the most common likely being "keepcr". However, if you are starting from the *flc Archive images, CR correction has already been done and you don't need the "keepcr" parameter. See the DOLPHOT ACS or WFC3 manual. Note that you run this command on all images: the science *flc images *and* the *drc reference image.

(2) The ACS or WFC3 camera has two CCD chips, and both are embedded in the multi-extension *flc files. Split each *flc image into two separate images, one for each chip:

splitgroups *.fits

Again, this command should be run on all images, including the reference image. It outputs two new images for each exposure, named *.chip1.fits and *.chip2.fits. The only exception is the reference image, which is written into a single file with the suffix .chip1.fits -- there will be no .chip2.fits reference image.

(3) Calculate the sky level for all the images. While in most cases DOLPHOT will be used to estimate the sky from the individual images, this step to derive a sky map for each image is still required by DOLPHOT.
And yes, even for the reference image! You probably have a long-ish series of raw images to do this for, and it would be tedious to type in the "calcsky" command individually for each one. So you can instead create a linux script like this short example (let's suppose we have 3 individual exposures in each of the two filters):

calcsky jdkb12010_drc.chip1 15 35 -128 2.25 2.00   <-- (reference-image name)
calcsky jdkb12t2q_flc.chip1 15 35 -128 2.25 2.00   <-- (first raw image, chip 1)
calcsky jdkb12t2q_flc.chip2 15 35 -128 2.25 2.00   <-- (first raw image, chip 2)
calcsky jdkb12t4q_flc.chip1 15 35 -128 2.25 2.00   <-- (2nd raw image, chip 1)
calcsky jdkb12t4q_flc.chip2 15 35 -128 2.25 2.00   <-- (2nd raw image, chip 2)
calcsky jdkb12t8q_flc.chip1 15 35 -128 2.25 2.00   (etc.)
calcsky jdkb12t8q_flc.chip2 15 35 -128 2.25 2.00
calcsky jdkb12tcq_flc.chip1 15 35 -128 2.25 2.00
calcsky jdkb12tcq_flc.chip2 15 35 -128 2.25 2.00
calcsky jdkb12tgq_flc.chip1 15 35 -128 2.25 2.00
calcsky jdkb12tgq_flc.chip2 15 35 -128 2.25 2.00
calcsky jdkb12tjq_flc.chip1 15 35 -128 2.25 2.00
calcsky jdkb12tjq_flc.chip2 15 35 -128 2.25 2.00

NOTE: In this script, don't add '.fits' at the end of each image name; it's implicitly assumed. As you can see, the list consists of the one reference image and then the 6 raw images, each separated into chip 1 and chip 2. The various names of the images in the list are produced by the 'splitgroups' command above. Notice that the image-name prefixes (jdkb12*) are taken straight from the HST Archive; in principle you could rename them all with something simpler, but I tend to leave them as is, because the Archive names may avoid later confusion over which image is which.

The parameters in each 'calcsky' command in the above example mean the following:

15 35     -- inner and outer radii for the sky calculation
-128      -- get a quick sky map. Good enough, since the local sky determination will all be done later anyway for each target object.
2.25 2.00 -- sigma_low, sigma_high.
Pixel values falling more than sigma_low below the mean or sigma_high above the mean are rejected iteratively.

NOTE: As said above, for the reference image there is no 'chip 2'; it has already been combined into a single image. Each chip has an internal (x,y) coordinate system where x runs from 1 --> 4096 and y runs from 1 --> 2048, but the reference image is (4096 x 4096), more or less. Thus when DOLPHOT converts the coordinates of each star onto the reference-image scale, the y-values for chip 2 are increased by approximately 2048 to put them onto the reference-image scale. That is, in the final output photometry files, ** the (x,y) values for all the chip2 data are properly shifted to the reference-image system! **

To run this script, just type 'source' followed by the script's filename.

___________________________________________________________________________

SETTING UP AND RUNNING DOLPHOT

Set up the master parameter file for DOLPHOT. In the DOLPHOT package there is a file 'dolphot.param' which defines all the parameters that the actual measurements are going to use. When you open it up you'll see a frighteningly long list. Many of the parameters are rather obscure and are not well explained in the official manual, but fortunately most of them can be left at their defaults. Some of them, however, should be set to values appropriate to your set of data.

One way to use DOLPHOT is to reduce the two CCD chips separately; i.e., you run the code twice, once for all the chip1 files and again for chip2. In that case you'll need to define 'chip1.param' and 'chip2.param'. In the top few lines of 'chip1.param' the image names are defined, like this (continuing with the example given above):

Nimg = 6                          <-- number of .flc images (integer)
img0_file = jdkb12010_drc.chip1   <-- the reference image
img1_file = jdkb12t2q_flc.chip1   <-- the first individual exposure
img2_file = jdkb12t4q_flc.chip1   (etc.)
img3_file = jdkb12t8q_flc.chip1
img4_file = jdkb12tcq_flc.chip1
img5_file = jdkb12tgq_flc.chip1
img6_file = jdkb12tjq_flc.chip1

NOTE: 'img0' is the name of the reference image. Then img1 through img6 are the names of all the raw *flc images for CCD chip 1. Nimg is the number of *flc images, *not* including the reference image.

Similarly, here are the same lines for 'chip2.param':

Nimg = 6
img0_file = jdkb12010_drc.chip1
img1_file = jdkb12t2q_flc.chip2
img2_file = jdkb12t4q_flc.chip2
img3_file = jdkb12t8q_flc.chip2
img4_file = jdkb12tcq_flc.chip2
img5_file = jdkb12tgq_flc.chip2
img6_file = jdkb12tjq_flc.chip2

NOTE (and very important): for chip 2, 'img0' is STILL called *drc.chip1, because again there is NO 'chip2' for the reference image.

However (!): you're not REQUIRED to reduce chip1 and chip2 separately. If the number of raw images isn't too big, you can combine them into a single reduction run, with a single parameter file (e.g. 'chip.param'). Then in the example above, the first few lines of the parameter file would be

Nimg = 12
img0_file = jdkb12010_drc.chip1
img1_file = jdkb12t2q_flc.chip1
img2_file = jdkb12t4q_flc.chip1
img3_file = jdkb12t8q_flc.chip1
img4_file = jdkb12tcq_flc.chip1
img5_file = jdkb12tgq_flc.chip1
img6_file = jdkb12tjq_flc.chip1
img7_file = jdkb12t2q_flc.chip2
img8_file = jdkb12t4q_flc.chip2
img9_file = jdkb12t8q_flc.chip2
img10_file = jdkb12tcq_flc.chip2
img11_file = jdkb12tgq_flc.chip2
img12_file = jdkb12tjq_flc.chip2

In this format the code will take longer to run, but I prefer to use it this way whenever possible. (If you're running it overnight anyway, who cares whether it takes 2 hours or 7 hours?)

CHOOSING PARAMETERS

Now it's time to make some choices for the more relevant parameters governing the details of the photometry. You can find helpful lists of these e.g. in the papers by Dalcanton et al. and Williams et al. listed above.
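Incidentally, typing out all those img*_file lines by hand invites mistakes when Nimg gets large. As a convenience (this script is my own sketch, not part of DOLPHOT), a short shell script can generate the image-list header; it assumes the *_flc.chip1.fits names produced by 'splitgroups' in the example above, and uses the example's reference-image name:

```shell
#!/bin/sh
# Sketch: generate the Nimg and img*_file lines of a DOLPHOT parameter file.
# Assumes the split chip-1 science images (from 'splitgroups') are named
# *_flc.chip1.fits and sit in the current directory; the reference-image
# name below is the one from the example in the text -- change both to match
# your own data.
ref=jdkb12010_drc.chip1
out=chip1.param

imgs=""
for f in *_flc.chip1.fits ; do
    [ -e "$f" ] || continue          # no matches: skip the literal pattern
    imgs="$imgs ${f%.fits}"          # DOLPHOT wants the names without '.fits'
done
set -- $imgs                         # count the images
n=$#

{
    echo "Nimg = $n"
    echo "img0_file = $ref"
    i=1
    for im in $imgs ; do
        echo "img${i}_file = $im"
        i=$((i+1))
    done
} > "$out"
```

You would then append the rest of your parameter choices (like those in the next section, or the Appendix 2 listing) to the same file.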
Examples of choices for some of the critical ones that appear at various places in the file, for the ACS and WFC3 cameras, are:

img_RAper = 8       <-- FitSky and RAper are linked. See description below.
FitSky = 2          <-- (FitSky can be 0, 1, 2, 3, or 4. See below.)
img_RSky = 15 25
img_RSky2 = 11 15
img_RPSF = 15
img_aprad = 10      <-- this is 10 px = 0.5 arcsec for ACS, the standard big radius
img_apsky = 20 25   <-- sky annulus inner and outer radii (px)
RCentroid = 2
SigFind = 3.0
SigFindMult = 0.85
SigFinal = 3.5
NegSky = 1          <-- important for HST images; if the mean sky has already been subtracted, then negative values are possible
dPosMax = 3
RCombine = 1.5
SigPSF = 5.0
PSFStep = 0.25
ApCor = 1           <-- correction of the magnitudes to 'infinite' aperture will be done

The DOLPHOT manual gives descriptions of all the parameters, but many look a bit mysterious. Also, for ACS and WFC3 a bunch of the parameters have been hardwired to fixed values; the ACS and WFC3 manuals list what these are and also give recommended basic values for the others. The references listed above will also describe some of the other parameters in more detail, as the resulting photometry from DOLPHOT can be sensitive to the chosen parameters (e.g. FitSky, RAper).

FitSky and RAper are linked, in such a way that they will derive the local sky background for each star in the image in a reasonable way, depending mainly on how crowded the image field is. Here are the choices:

FitSky = 0 --> use the 'calcsky' values as computed in the earlier step to set the local sky background. Not recommended unless the field is very sparsely populated and the sky is extremely flat.

FitSky = 1 --> set the sky background from an annulus with inner and outer radii given by img_RSky. The star itself is measured within an aperture of radius img_RAper. This choice is similar to daophot/phot, and is a perfectly good choice for uncrowded fields.
If you are using this option, then RAper should be similar to the FWHM of the star image to maximize signal-to-noise, and the sky annulus should be several times larger in radius so that it doesn't include the wings of the star light.

FitSky = 2 --> set the sky background from an annulus with inner and outer radii set by img_RSky2. The inner radius must be (RAper+1) or larger, and the outer radius must be RPSF or less. This option is more useful for crowded fields, where it will be hard to define a "sky annulus" in the normal sense of FitSky=1. Instead, it uses a smaller annulus closer to the star, which will contain some light from the wings of the star, but this is accounted for in the internal fitting steps. For FitSky=2, Dolphin recommends making RAper larger than the FWHM so that it includes some sky -- maybe about 3 x FWHM. See the manual.

FitSky = 3 --> here, a simultaneous solution is performed for the PSF fit and the local sky background at the same location as the star. In theory this should be great for crowded fields, but in practice it's not very robust.

FitSky = 4 --> same as FitSky=3, but now the local sky is assumed to follow a gradient (ramp) across the measurement aperture. This is even less robust.

The author of the code, Andy Dolphin, recommends using FitSky=2 for just about every case, crowded or not; it's the most versatile option. However, for uncrowded fields FitSky=1 is perfectly good. See the Appendix below for a parameter list that I have used recently.

Finally, we are ready to run the master code and then sit back while it grinds away. If you chose to run chip1 and chip2 separately, then:

dolphot chip1.dat -pchip1.param > output1.log &
dolphot chip2.dat -pchip2.param > output2.log &

where 'chip1.param' is the master parameter file and 'chip1.dat' is the name of the output data file (call them what you like). Note there is no space between the flag (-p) and the filename.
Dolphot puts out some interesting and maybe-useful output while it's running, so I normally pipe that to a logfile. I also set it running as a background job, since it may take a while if it's a populous stellar field and you have lots of images.

In the measurement process, DOLPHOT uses PSFs that are taken from the HST Archive library. You don't have to build them for yourself. The other obvious advantages of using these PSFs are that they have huge effective signal-to-noise, and they properly account for the distortion and change of scale of the camera going from center to edge.

In addition, DOLPHOT generates magnitudes on the VEGAMAG photometric scale, and it uses updated VEGAMAG filter zeropoints. For ACS these zeropoints are given in /dolphot2.0/acs/data/filters.dat, and conversions to the UBVRI system follow Sirianni et al. 2005, PASP 117, 1049. The zeropoint files often get updated, so the values will likely differ from what is in Sirianni et al. The magnitudes transformed into UBVRI should, however, be approached with caution (see below).

Expected run times: expect that Dolphot will take about two orders of magnitude longer than daophot/phot or allstar, and about three orders of magnitude longer than SExtractor. On my linux workstation, just 8 raw images take about 30-60 minutes of run time for *each* chip. Larger sets of images will take proportionally longer. You'll have plenty of time to do lunch, or go home for the day. One special run I did with 238 images took two weeks. If you have a lot to do (see the Williams and Dalcanton papers, e.g.), you might want to look into setting up HPC resources.

_________________________________________________________________________

ANALYZE THE OUTPUT

DOLPHOT produces a LOT of output, and the main output data files (called 'chip1.dat' and 'chip2.dat' in the example above) are big. Each file has one line per star, so obviously it can run into thousands or even millions of lines. But very helpfully, it generates a file called e.g.
'chip1.dat.columns' which tells you what each column of the table gives. For a basic run, you will probably be interested in just the mean results combined over all the raw images, which are given in the first 37 (!) columns. The most important of these will be:

- x, y coordinates
- Object type (= 1 for star; reject any other types for cleaner photometry)
- Instrumental VEGAMAG magnitude
- Magnitude uncertainty
- Chi
- Signal-to-noise
- Sharpness
- Roundness
- Crowding

and then the same list for the other, redder filter (here ACS_F814W). DOLPHOT has already done the jobs of cross-identifying stars in both filters, doing aperture corrections to "infinite" radius, applying the VEGAMAG zeropoints, and even transforming into UBVRI magnitudes. (Be careful about using the transformed magnitudes, however; the Sirianni prescriptions are not always the best, and for WFC3 they are only approximate. See Harris 2018, AJ 156, 296 for more recent transformations of the standard WFC3 filters into the BVI system.) In more recent times it has become more conventional to use the magnitudes in the natural filter system, without transformation.

The same sets of numbers for the individual raw images come next, in columns 38 onward. The results for the individual images might be useful, for example, if you are looking for variable stars.

The first thing you'll want to do with the output file is to extract a smaller file including just the columns that you really want. In linux, a command to do that is "awk": for example,

awk '{print $3,$4,$11,$16}' chip1.dat > newfile.dat

will pick out columns 3, 4, 11, and 16 and write them to a new file. Or of course you can write your own routines to suit yourself. Several of the parameters, particularly object type, chi, S/N, and sharpness, can be used very effectively to weed out nonstellar things or faint junk (and there is LOTS of faint junk in the list; it's the vast majority of what is present).
So plot up these diagnostic parameters versus magnitude, and don't be afraid to cut ruthlessly by using them. Use artificial-star tests to guide you (see below). Note that these diagnostic parameters appear in the table a number of times: as a combined value over all the data, as combined values for each filter, and as individual values. For example, there is a chi for all measurements, then a chi_F475W and a chi_F814W. Experiment with these values.

At some point you might find the 'chip*.dat.info' files useful. They give the final transformations of (x,y) coordinates onto the reference image, and the aperture corrections in magnitudes.

CULLING THE DATA:

- Object type: keep only type = 1 ("bright stars") -- these already go very deep.

- S/N: keep only those objects with S/N above a certain value (like 4 or 5) in each filter; this will clean up the data file tremendously, as DOLPHOT will claim measurements of objects with extremely small S/N. Typically S/N = 5 is close to the 50% detection completeness level.

- Sharpness is the most useful for star/galaxy separation. It is ideally zero for true stars, negative for more extended objects (like faint small galaxies), and positive for artifacts like cosmic rays. (Users of daophot will realize that this scale is opposite to daophot/sharp.)

- CHI is useful as a secondary parameter to clean out some remaining non-stellar objects.

- CROWD is an interesting new parameter, not seen in Daophot. This is a measurement of how much brighter, in magnitudes, an object *would* have been had PSF fits not been made to neighboring objects. It is useful for cleaning out objects where the photometry is most likely to be suspect; for example, CROWD > 1.0 mag or so might be a bad sign.

From here on, you're on your own -- into the analysis stages of whatever you are pursuing.
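Before moving on, here is a small awk sketch of what such a culling pass can look like, run on a three-row demo table rather than a real catalog. The column positions are placeholders invented for the demo (pretend column 6 = S/N, 7 = sharpness, 10 = crowding, 11 = object type); read the true positions from your own chip1.dat.columns file, and tune the cut values with the artificial-star tests described next.

```shell
# Demo catalog: three fake rows using the pretend column layout described
# above (6 = S/N, 7 = sharpness, 10 = crowding, 11 = object type).
cat > chip1_demo.dat <<'EOF'
100.1 200.2 0 0 0 25.0 0.01 0 0 0.2 1
101.5 202.8 0 0 0 3.2 0.02 0 0 0.1 1
103.0 204.0 0 0 0 40.0 -0.90 0 0 0.3 2
EOF

# Keep only type = 1, S/N >= 5, |sharpness| < 0.3, crowding < 1.0 mag;
# only the first demo row survives these cuts.
awk '($11 == 1) && ($6 >= 5.0) && ($7 > -0.3) && ($7 < 0.3) && ($10 < 1.0)' \
    chip1_demo.dat > chip1_clean.dat
```

Against a real chip1.dat you would substitute the file name and the actual column numbers, and usually add a magnitude-uncertainty cut as well.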
________________________________________________________________________

ARTIFICIAL-STAR TESTS

The last major operation you will likely want to do is to determine the internal photometric precision, internal biases versus magnitude, and detection completeness limits of your photometry. The normal way to do this is to insert 'fake stars' (scaled PSFs) into the images and remeasure them. Here I'll assume that you know the basic idea already and go on to how to do this within DOLPHOT.

DOLPHOT will generate a big list of fake stars for you, through the key command 'fakelist'. Here is an example:

fakelist chip1.dat ACS_F475W ACS_F814W 20.0 30.0 0.0 4.0 -nstar=5000 > chip1_fakein.dat
fakelist chip2.dat ACS_F475W ACS_F814W 20.0 30.0 0.0 4.0 -nstar=5000 > chip2_fakein.dat

Notice that we are doing this twice, once for each CCD chip, in order to be complete. There will be some cases where the two chips would actually give different results for the artificial-star tests -- for example, where the level of crowding changes a lot across the field of the camera, or where the field is targeted on a big galaxy with a substantial gradient of background light across the field.

The above examples will generate files 'chip1_fakein.dat' and 'chip2_fakein.dat' of 5000 artificial stars each, covering the magnitude range F475W = 20.0 to 30.0 and the color range (F475W-F814W) = 0.0 to 4.0. Thus in the color-magnitude diagram of F475W against (F475W-F814W), the fake stars will evenly but randomly populate a rectangular area over those ranges.

NOTE: The first parameter 'chip1.dat' is actually the output data file from the previous DOLPHOT run.

In the example above, the 5000 stars are NOT added all at once to the image; they are put in one at a time, so that the intrinsic degree of crowding on the image is not changed. So go ahead and add as many as you like. Larger numbers just take longer to run, but you get better statistics and confidence in the detection limits.
What this also means is that you can be generous about the upper and lower magnitude limits -- pick a faint end that will be well below the limits that you're likely to be interested in. Nevertheless, I would be hesitant to go above 5000-10000 stars with 'fakelist'; if you have a long list of input images to run through and you are running on an ordinary workstation, the resulting run time can be many days long.

Now you need to run DOLPHOT again, this time with the fake stars added to the image. First set up a separate DOLPHOT parameter file with a few switches changed. Suppose we call it 'chip1_fake.param' to distinguish it from the 'chip1.param' file we used for the original photometry. The parameters to change are:

FakeStars = chip1_fakein.dat   <-- the file generated by 'fakelist'
FakeOut = chip1_fakeout.dat    <-- the output file of fake-star measurements (if you leave this blank, the default output filename will be chip1.dat.fake)
FakeMatch = 2.0
RandomFake = 1                 <-- add Poisson noise
ACSuseCTE = 1

Then run DOLPHOT again with the command

dolphot chip1.dat -pchip1_fake.param > output.log &

Notice that 'chip1.dat' is still listed as the first parameter. This acts as a key for Dolphot to look for the right PSF files from the original run, named chip1.dat.1.psf.fits, etc.

Reading the output: unhelpfully, the output file does not have the same arrangement of columns as the original photometry file, and the DOLPHOT manual doesn't tell you what the arrangement is. However, it works like this: if we have n_blue F475W images and n_red F814W images, then the first (4 + 2 n_blue + 2 n_red) columns of *fakeout.dat will have the input values for the coordinates and magnitudes of the fake stars. In our example above, n_blue = n_red = 3, so then

- columns 3, 4 = input x,y coordinates
- columns 6, 8, 10, 12, 14, 16 = input magnitude on each individual image

(In fact, the input magnitudes are all the same in each filter, so I don't really know why they are all listed.)
The next set of columns, starting with column number (5 + 2 n_blue + 2 n_red), goes back to the normal format defined in the 'chip1.dat.columns' file, giving the measured results for the fake stars. First come the measured (x,y) coordinates; then chi, S/N, sharpness, roundness, orientation, crowd, object type, counts, sky level, count rate and uncertainty, and then the magnitudes. The faintest input stars won't appear in the output list because they were lost in the sky noise. By comparing the input list with the output, you can deduce the completeness versus magnitude, as well as the internal random uncertainties and biases of the photometry. As mentioned above, you can now plot up (sharp, chi, round, crowd) versus magnitude for the artificial stars to define boundaries for rejecting nonstellar objects or other bad data, and then apply them to clean up your list of the real data.

___________________________________________________________________________

APPENDIX 1: What about combining images with different scales, or even images from different cameras?

Dolphot will do this! (Within limits.) For example, the reference image (*drz or *drc) might have a different scale in arcsec/px than the individual exposures, or you might be combining exposures in different filters from different cameras. That's OK. The key parameters to set are

UseWCS = 2   #use WCS info in alignment (int 0=no, 1=shift/rotate/scale, 2=full)
Align = 2    #align images? (int 0=no,1=const,2=lin,3=cube)

and that's all you should have to do. As an example, I have quite successfully run Dolphot on F110W images of a field from WFC3/IR (0.1 arcsec/px) together with images in F475X taken from WFC3/UVIS (0.04 arcsec/px). When matching up the images, Dolphot will use the WCS parameters in the image headers to do a preliminary coordinate conversion, and then will use individual stars on the images to fine-tune that into a final (x,y) conversion of each individual image to the scale of the reference image.
One combination I have NOT been able to run successfully is a set of images in one filter taken with ACS or WFC3, combined with images in another filter taken with the older WFPC2 camera. (To be sure, this is a pretty unusual situation.) WFPC2 has four different CCDs in a 2x2 array, and each CCD is oriented differently by +-90 or 180 degrees rotation relative to the others. So transforming them to the scale of the reference image and combining them later is very much more complicated. (If you've worked out a scheme to do this, let me know.) What I've ended up doing in this situation is a hybrid approach:

(a) define the deep ACS or WFC3 image as the reference image and run Dolphot on the ACS or WFC3 series of images in their single filter;

(b) build a multidrizzled WFPC2 deep image for the other filter and then just do the photometry on that combined WFPC2 image with any code you like (daophot or other);

(c) derive the coordinate transformation from the drizzled WFPC2 image to the ACS/WFC3 reference image, and finally match up the two photometry lists.

__________________________________________________________________________

APPENDIX 2: Sample 'chip1.param' file

Nimg = 6                 #number of images (int)
#
# The following parameters can be specified for individual images (img1_...)
# or applied to all images (img_...)
img0_file = idrg01030_drc.chip1    #reference image
img1_file = idrg01dmq_flc.chip1    #individual exposure 1
img2_file = idrg01dpq_flc.chip1    #individual exposure 2
img3_file = idrg01dtq_flc.chip1    #individual exposure 3
img4_file = idrg01dzq_flc.chip1    #individual exposure 4
img5_file = idrg01e2q_flc.chip1    #individual exposure 5
img6_file = idrg01e6q_flc.chip1    #individual exposure 6
#
img_shift = 0 0          #shift relative to reference
img_xform = 1 0 0        #scale, distortion, and rotation
img_PSFa = 3 0 0 0 0 0   #PSF XX term (flt)
img_PSFb = 3 0 0 0 0 0   #PSF YY term (flt)
img_PSFc = 0 0 0 0 0 0   #PSF XY term (flt)
img_RAper = 8.0          #photometry aperture size (flt)
img_RChi = -1            #aperture for determining centroid (flt); if <=0 use RAper
img_RSky = 15 25         #radii defining sky annulus (flt>=RAper+0.5)
img_RSky2 = 11 15        #radii defining sky annulus (for FitSky=2 option)
img_RPSF = 15            #PSF size (int>0)
img_aprad = 10           #radius for aperture correction
img_apsky = 20 25        #sky annulus for aperture correction
#
# The following parameters affect the finding and measurement of stars
photsec =                #section: group, chip, (X,Y)0, (X,Y)1
RCentroid = 2            #centroid box size (int>0)
SigFind = 3.0            #sigma detection threshold (flt)
SigFindMult = 0.85       #multiple for quick-and-dirty photometry (flt>0)
SigFinal = 3.5           #sigma output threshold (flt)
MaxIT = 10               #maximum iterations (int>0)
FPSF = Lorentz           #PSF function (str/Gauss,Lorentz,Lorentz^2,G+L)
PSFPhot = 1              #photometry type (int/0=aper,1=psf,2=wtd-psf)
PSFPhotIt = 3            #number of iterations in PSF-fitting photometry (int>=0)
FitSky = 2               #fit sky? (int/0=no,1=yes,2=small,3=with-phot)
SkipSky = 1              #spacing for sky measurement (int>0)
SkySig = 2.25            #sigma clipping for sky (flt>=1)
NegSky = 1               #allow negative sky values? (int 0=no,1=yes)
NoiseMult = 0.10         #noise multiple in imgadd (flt)
FSat = 0.999             #fraction of saturate limit (flt)
Zero = 25.0              #zeropoint for 1 DN/s (flt)
PosStep = 0.25           #search step for position iterations (flt)
dPosMax = 3.0            #maximum single-step in position iterations (flt)
RCombine = 1.5           #minimum separation for two stars for cleaning (flt)
SigPSF = 5.0             #min S/N for PSF parameter fits (flt)
PSFStep = 0.25           #stepsize for PSF
MinS = 1.0               #minimum FWHM for good star (flt)
MaxS = 9.0               #maximum FWHM for good star (flt)
MaxE = 0.5               #maximum ellipticity for good star (flt)
#
# Settings to enable/disable features
UseWCS = 2               #use WCS info in alignment (int 0=no, 1=shift/rotate/scale, 2=full)
Align = 2                #align images? (int 0=no,1=const,2=lin,3=cube)
AlignIter = 1            #number of iterations on alignment (int>0)
AlignTol = 0             #number of pixels to search in preliminary alignment (flt>=0)
AlignStep = 1            #stepsize for preliminary alignment search (flt>0)
AlignOnly = 0            #exit after alignment
Rotate = 1               #allow cross terms in alignment? (int 0=no, 1=yes)
SubResRef = 1            #subpixel resolution for reference image (int>0)
SecondPass = 1           #second pass finding stars (int 0=no,1=yes)
SearchMode = 1           #algorithm for astrometry (0=max SNR/chi, 1=max SNR)
Force1 = 0               #force type 1/2 (stars)? (int 0=no,1=yes)
EPSF = 1                 #allow elliptical PSFs in parameter fits (int 0=no,1=yes)
PSFsol = 1               #analytic PSF solution (int -1=none, 0=con, 1=lin, 2=quad)
PSFres = 1               #make PSF residual image? (int 0=no,1=yes)
psfstars =               #coordinates of PSF stars
psfoff = 0.0             #coordinate offset (PSF system - dolphot system)
ApCor = 1                #find/make aperture corrections? (int 0=no,1=yes)
SubPixel = 1             #subpixel PSF calculation (int>0)
FakeStars =              #file with fake star input data
FakeOut =                #file with fake star output data (default=phot.fake)
FakeMatch = 3.0          #maximum separation between input and recovered star (flt>0)
FakePSF = 2.0            #assumed PSF FWHM for fake star matching
FakeStarPSF = 1          #use PSF residuals in fake star tests? (int 0=no,1=yes)
RandomFake = 1           #apply Poisson noise to fake stars? (int 0=no,1=yes)
FakePad = 0              #minimum distance of fake star from any chip edge to be used
UsePhot =                #if defined, use alignment, PSF, and aperture corr from photometry
DiagPlotType =           #format to generate diagnostic plots (PNG, GIF, PS)
xytfile =                #position file for warmstart (str)
xytpsf =                 #reference PSF for image subtraction
VerboseData = 0          #write all displayed numbers to a .data file
#
# Flags for HST modes
ForceSameMag = 0         #force same count rate in images with same filter? (int 0=no, 1=yes)
FlagMask = 4             #photometry quality flags to reject when combining magnitudes
CombineChi = 0           #combined magnitude weights use chi? (int 0=no, 1=yes)
WFPC2useCTE = 1          #apply CTE corrections on WFPC2 data? (int 0=no, 1=yes)
ACSuseCTE = 0            #apply CTE corrections on ACS data? (int 0=no, 1=yes)
WFC3useCTE = 0           #apply CTE corrections on WFC3 data? (int 0=no, 1=yes)
ACSpsfType = 0           #use Anderson PSF cores? (int 0=no, 1=yes)
WFC3UVISpsfType = 0      #use Anderson PSF cores? (int 0=no, 1=yes)
WFC3IRpsfType = 0        #use Anderson PSF cores? (int 0=no, 1=yes)
InterpPSFlib = 1         #interpolate PSF library spatially
#
# Other flags not recommended for most users
#img_ref2img =           #high-order terms for conversion between image
#                        #(distortion-corrected if HST) and reference
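If you end up generating many variants of this parameter file from scripts (e.g. one per chip, plus fake-star versions), a tiny reader can help keep them consistent. Here is a sketch of a parser for the simple 'name = value  #comment' format above; this is my own helper, not part of DOLPHOT.

```python
def parse_dolphot_params(lines):
    """Parse DOLPHOT-style 'name = value  #comment' lines into a dict of
    {name: value string}. Anything after '#' is treated as a comment, so
    comment-only lines are skipped; values may be empty ('FakeStars =').
    Use as: parse_dolphot_params(open('chip1.param'))."""
    params = {}
    for line in lines:
        line = line.split('#', 1)[0].strip()   # drop comments and whitespace
        if '=' not in line:                    # skip blank/comment-only lines
            continue
        key, value = line.split('=', 1)
        params[key.strip()] = value.strip()
    return params
```

Writing a modified file back out is then just a loop over the dict; DOLPHOT itself only ever needs the plain-text file.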