*==============================================================*
* Ex1.do                                                       *
*==============================================================*

****************************************************************
* This do-file includes a number of lines that say:
*     STOP HERE AND THINK!!
* When Stata reaches the first of these, it will not
* recognise it as a valid command and will come to a juddering
* halt. At this point, there will be a few questions for you
* to think about. When you have answered them, delete the
* offending line and re-run the do-file. It will crash at the
* next point where you have some questions to think about.
****************************************************************

****************************************************************
* In this exercise:
*   - summarise the panel data set
*   - transition tables
*   - within- and between-group variation in categorical and
*     continuous variables
****************************************************************

version 13
clear all
set more off
set matsize 800
capture log close

local working "P:\working"
log using "`working'\ex1.log", replace
* change the path *

***************************************************************
* Load the panel data and tell Stata it is panel data
***************************************************************
use "`working'\longperson_unbal.dta", clear
* change the path *

* convert the string identifier variable to numeric form
cap drop id
destring xwaveid, gen(id)
describe, short
xtset id wave

**************************************************************
* Questions to think about:
* (1) List the 60 most common patterns of panel
*     participation (use xtdes). Is the panel balanced?
*     Is it compact? Tabulate the wave variable to show the
*     number of observations per wave.
* (2) Look at key variables: weekly gross pay (wscei),
*     hours per week usually worked (jbhruc), and highest
*     qualification (edhigh1). Use summarize with the detail
*     option for continuous variables and tabulate for
*     categorical variables.
* (3) Do these variables contain any invalid values? There
*     are several different ways to identify the different
*     types of invalid cases and their codes:
*     - consult the online dictionary:
*       www.melbourneinstitute.com/hilda/doc
*     - list the value labels
*     - tabulate numerical values of invalid cases
*     (one possible starting point is sketched after the
*     STOP line below)
**************************************************************

STOP HERE AND THINK!!

* use imputed income *
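* A sketch of one possible way to start on the questions above (not the only
* approach; the variable names are the ones already used in this exercise):
xtdescribe, patterns(60)              // participation patterns: balanced? compact?
tab wave                              // observations per wave
summarize wscei jbhruc, detail        // continuous variables
tab edhigh1, missing                  // categorical variable
label list edhigh1                    // value labels, incl. codes for invalid cases
tab edhigh1 if edhigh1<0, nolabel     // numerical values of invalid cases
summ wscei if wscei<0
summ jbhruc if jbhruc<0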
**************************************************************
* Assume we aren't interested in why cases are missing:
* convert all invalid codes to missing (=.)
* Warning: Stata also interprets . as +infinity!
**************************************************************
replace jbhruc=. if jbhruc<0
replace edhigh1=. if edhigh1<0
replace wscei=. if wscei<0

summ wscei, de
summ jbhruc, de
tab edhigh1, miss

*************************************************************
* Transitions in categorical variables.
* Our aim is to examine transitions in the categorical vars.
* As a simple example, create a binary variable from esdtl
* to indicate part-time work.
* Create its lagged values and browse the data to see what
* happens if a wave is missing.
*************************************************************
cap drop pt
recode esdtl (2=1) (else=0), gen(pt)

*************************************************************
* Examine transitions by using xttrans and tabulate.
* Are there any differences between the two?
*************************************************************
xttrans pt, freq      //give absolute frequencies as well
sort id wave
cap drop lpt
gen lpt=L.pt
tab lpt pt, row

**************************************************************
* Questions to think about:
* (1) Construct the transition table manually by:
*     (i)  generating the lagged state (call it lpt)
*     (ii) tabulating pt against lpt
* (2) If you have created lpt using the lag operator (L.pt),
*     then you'll find a different result than that given by
*     xttrans.
* (3) To understand why, create a lagged variable (call it
*     lpt1 this time) in a different way, by using the
*     previous observation (pt[_n-1]). You'll find that
*     this gives the same result as xttrans.
* (4) Look at pt, lpt and lpt1 using the browse command
*     (hint: find an individual with a gap in his/her data).
*     Which method is preferable, xttrans or the tabulation
*     of lpt and pt?
* (5) A way to get xttrans to take account of gaps is to use
*     the fillin command to create missing values in the gap.
*     Do this, check the result by browsing the data, and
*     try xttrans again.
*     (a sketch of one possible approach follows the STOP line)
*
* NOTE: you'll need to sort the data by id wave before
* creating the lagged variables
**************************************************************

STOP HERE AND THINK!!
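* A sketch of one possible approach to questions (3) and (5). Nothing here is
* the required answer; lpt1 is just an illustrative variable name.
sort id wave
cap drop lpt1
by id: gen lpt1=pt[_n-1]       // previous observation for the person, ignoring gaps in wave
tab lpt1 pt, row               // per the notes above, this should reproduce the xttrans table
* fillin makes the gaps explicit (it adds rows in which pt is missing), so
* xttrans then agrees with the lag-operator tabulation; done inside
* preserve/restore so the extra rows do not stay in the data.
preserve
fillin id wave
xttrans pt, freq
restore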
preserve        //store data in original form

***********************************************************
* Examine transitions in education. Are the patterns
* plausible?
***********************************************************
cap drop ledhigh1
sort id wave
gen ledhigh1=l.edhigh1
label values ledhigh1 edhigh1
xttrans ledhigh1, freq

**************************************************************
* Transitions show the within variation of a categorical
* variable. Now examine within versus between variation
* using xttab.
* Warning - xttab is obscure! Use the Stata help command.
* Look at pt work. Compare to sex. Is this what you expect?
**************************************************************

**STOP HERE AND THINK!!
help xttab
xttab pt
xttab sex

**************************************************************
* Now look at within vs. between variation for the
* continuous variable wsce using xtsum - use the
* pre-imputation variable. Compute the within- and between-
* group shares of variation. What do you notice?
* Compare with imputed current weekly gross wages and
* salary (wscei).
**************************************************************

**STOP HERE AND THINK!!
xtsum wsce
display r(sd_w)
display r(sd)
display r(sd_w)^2/r(sd)^2      //share of within variation
display r(sd_b)^2/r(sd)^2      //share of between variation

describe wscei                 // imputed
tab wscei if wscei<0, nol
xtsum wscei
display r(sd_w)^2/r(sd)^2      //share of within variation
display r(sd_b)^2/r(sd)^2      //share of between variation

*************************************************************
* Do a graphical cohort analysis of age-earnings profiles
*************************************************************
capture drop year
capture drop cohort
gen year=2000+wave
gen cohort=year-hgage          //derive year of birth from age
tab cohort

*************************************************************
* Generate real earnings - deflate by the consumer price
* index (ABS CatNo 6401.0 Consumer Price Index, Australia,
* Table 1, All groups, Australia, Series ID A2325846C,
* September quarter). Sept 2015 issue. Base is 2001 wages.
*************************************************************
capture drop rwsce
gen     rwsce=wsce/( 74.7/74.7) if year==2001
replace rwsce=wsce/( 77.1/74.7) if year==2002
replace rwsce=wsce/( 79.1/74.7) if year==2003
replace rwsce=wsce/( 80.9/74.7) if year==2004
replace rwsce=wsce/( 83.4/74.7) if year==2005
replace rwsce=wsce/( 86.7/74.7) if year==2006
replace rwsce=wsce/( 88.3/74.7) if year==2007
replace rwsce=wsce/( 92.7/74.7) if year==2008
replace rwsce=wsce/( 93.8/74.7) if year==2009
replace rwsce=wsce/( 96.5/74.7) if year==2010
replace rwsce=wsce/( 99.8/74.7) if year==2011
replace rwsce=wsce/(101.8/74.7) if year==2012
replace rwsce=wsce/(104.0/74.7) if year==2013

*************************************************************
* Generate relative earnings - deflate by the average
* earnings index (ABS CatNo 6345.0 Wage Price Index,
* Australia, Table 1, Private & public sectors; all
* industries, Australia, seasonally adjusted, Series ID
* A2713849C, September quarter). Jun 2015 issue.
* Base is Sep 2001.
*************************************************************
capture drop relwsce
gen     relwsce=wsce/( 75.7/75.7) if year==2001
replace relwsce=wsce/( 78.2/75.7) if year==2002
replace relwsce=wsce/( 81.0/75.7) if year==2003
replace relwsce=wsce/( 83.9/75.7) if year==2004
replace relwsce=wsce/( 87.4/75.7) if year==2005
replace relwsce=wsce/( 90.9/75.7) if year==2006
replace relwsce=wsce/( 94.7/75.7) if year==2007
replace relwsce=wsce/( 98.7/75.7) if year==2008
replace relwsce=wsce/(101.8/75.7) if year==2009
replace relwsce=wsce/(105.5/75.7) if year==2010
replace relwsce=wsce/(109.4/75.7) if year==2011
replace relwsce=wsce/(113.4/75.7) if year==2012
replace relwsce=wsce/(116.4/75.7) if year==2013
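* The two blocks above spell out one replace per year. A more compact
* alternative (a sketch only; rwsce2 is a hypothetical variable created and
* dropped here, and the CPI figures are the same ones used above):
capture drop rwsce2
gen rwsce2=.
local cpi "74.7 77.1 79.1 80.9 83.4 86.7 88.3 92.7 93.8 96.5 99.8 101.8 104.0"
local y = 2001
foreach c of local cpi {
    replace rwsce2 = wsce/(`c'/74.7) if year==`y'
    local ++y
}
summ rwsce rwsce2      // the two versions should have identical summaries
drop rwsce2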
* convert year of birth to 5-year cohort groups for 1941-1985
recode cohort (-999/1940=.) (1941/1945=1) (1946/1950=2) ///
    (1951/1955=3) (1956/1960=4) (1961/1965=5) (1966/1970=6) ///
    (1971/1975=7) (1976/1980=8) (1981/1985=9) (1986/9999=.)
tab cohort

* use the collapse command to replace the dataset by one containing
* nominal, real and relative earnings averages for age-cohort groups
collapse wsce rwsce relwsce, by(cohort hgage)   //average earnings within age-cohort cells
keep if hgage>=16 & hgage<=65

* create cohort-specific earnings variables and label them
forvalues c=1/9 {
    capture drop e`c'
    gen e`c'=wsce if cohort==`c'
}
label variable e1 "1941-45"
label variable e2 "1946-50"
label variable e3 "1951-55"
label variable e4 "1956-60"
label variable e5 "1961-65"
label variable e6 "1966-70"
label variable e7 "1971-75"
label variable e8 "1976-80"
label variable e9 "1981-85"
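* Aside: the loop above builds one earnings variable per cohort by hand.
* Stata's separate command can do the same job in one line (a sketch only;
* ealt is a hypothetical stub, created here and dropped straight away):
separate wsce, by(cohort) generate(ealt)   // creates ealt1-ealt9, one per cohort value
describe ealt*
drop ealt*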
**************************************************************
* Question to think about:
* Use the browse command to examine the effects of the
* collapse command. This is why we used the preserve
* command earlier on, so that we can go back to the pre-
* collapse data using the restore command. Another way
* would have been to save the data so that we could
* read it back again after finishing with the collapsed
* data.
**************************************************************

**STOP HERE AND THINK!!

* plot the age-earnings profiles for nominal earnings
graph twoway scatter e1-e9 hgage ///
    , msize(small..) connect(l..) ytitle("earnings") ///
    yscale(titlegap(1)) xtitle("age") xscale(range(16 65) titlegap(1)) ///
    legend(rows(3)) title("Nominal earnings")

* now export the graph to a Windows MetaFile for inclusion in a Word document
graph export "`working'/cohort-age-nomearnings-profiles.wmf", as(wmf) replace

* and now do the same for real earnings (deflated by the CPI)
cap drop reale1-reale9
forvalues c=1/9 {
    gen reale`c'=rwsce if cohort==`c'
}
label variable reale1 "1941-45"
label variable reale2 "1946-50"
label variable reale3 "1951-55"
label variable reale4 "1956-60"
label variable reale5 "1961-65"
label variable reale6 "1966-70"
label variable reale7 "1971-75"
label variable reale8 "1976-80"
label variable reale9 "1981-85"
graph twoway scatter reale1-reale9 hgage ///
    , msize(small..) connect(l..) ytitle("earnings") ///
    yscale(titlegap(1)) xtitle("age") xscale(range(16 65) titlegap(1)) ///
    legend(rows(3)) title("Real earnings")
graph export "`working'/cohort-age-realearnings-profiles.wmf", as(wmf) replace

* and now do the same for relative earnings (deflated by the average earnings index)
cap drop rele1-rele9
forvalues c=1/9 {
    gen rele`c'=relwsce if cohort==`c'
}
label variable rele1 "1941-45"
label variable rele2 "1946-50"
label variable rele3 "1951-55"
label variable rele4 "1956-60"
label variable rele5 "1961-65"
label variable rele6 "1966-70"
label variable rele7 "1971-75"
label variable rele8 "1976-80"
label variable rele9 "1981-85"
graph twoway scatter rele1-rele9 hgage ///
    , msize(small..) connect(l..) ytitle("earnings") ///
    yscale(titlegap(1)) xtitle("age") xscale(range(16 65) titlegap(1)) ///
    legend(rows(3)) title("Relative earnings")
graph export "`working'/cohort-age-relativeearnings-profiles.wmf", as(wmf) replace

restore        // go back to the pre-collapse data

*****************************************************************
* Now we look at examples of the basic panel regression command
* (xtreg) for within-group (fixed effects) and between-group
* modelling, and explore the options for robust standard errors
* and testing of residual serial correlation
*****************************************************************

******************************************************************
* To start with, let's focus on a very simple model to see how
* the command works, and to look at age, cohort (year of birth)
* and period effects. First, select a sample of employees of
* working age and drop observations with missing values
******************************************************************
gen year=2000+wave
gen cohort=year-hgage          //derive year of birth from age
replace jbhruc=. if jbhruc<0

* generate real earnings - deflate by the consumer price index
* (ABS CatNo 6401.0 Consumer Price Index, Australia,
* Table 1, All groups, Australia, Series ID A2325846C,
* September quarter). Base is 2001 wages; use imputed earnings
capture drop rwscei
gen     rwscei=wscei/( 74.7/74.7) if year==2001
replace rwscei=wscei/( 77.1/74.7) if year==2002
replace rwscei=wscei/( 79.1/74.7) if year==2003
replace rwscei=wscei/( 80.9/74.7) if year==2004
replace rwscei=wscei/( 83.4/74.7) if year==2005
replace rwscei=wscei/( 86.7/74.7) if year==2006
replace rwscei=wscei/( 88.3/74.7) if year==2007
replace rwscei=wscei/( 92.7/74.7) if year==2008
replace rwscei=wscei/( 93.8/74.7) if year==2009
replace rwscei=wscei/( 96.5/74.7) if year==2010
replace rwscei=wscei/( 99.8/74.7) if year==2011
replace rwscei=wscei/(101.8/74.7) if year==2012
replace rwscei=wscei/(104.0/74.7) if year==2013

gen w_hr=rwscei/jbhruc if esbrd==1     // real hourly wage and salary for all jobs

* use xtsum on the hourly wage variable
xtsum w_hr if w_hr>0

gen lwage_hr=ln(w_hr)
gen age=hgage/10     /* rescale age - rescale by 10 when interpreting coefficient */

gen keeper=1         // variable to indicate which cases to use for modelling
replace keeper=0 if esdtl!=1 & esdtl!=2                       //keep only the employed
replace keeper=0 if hgage>61 & wave>=1  & wave<=4  & sex==2   //state retirement age for females, 2001/2002-2004/2005
replace keeper=0 if hgage>62 & wave>=5  & wave<=8  & sex==2   //state retirement age for females, 2005/2006-2008/2009
replace keeper=0 if hgage>63 & wave>=9  & wave<=12 & sex==2   //state retirement age for females, 2009/2010-2012/2013
replace keeper=0 if hgage>64 & wave>=13 & wave<=16 & sex==2   //state retirement age for females, 2013/2014-2016/2017
replace keeper=0 if hgage>64 & sex==1                         //state retirement age for males

**************************************************************
* Questions to think about:
* (1) As always, inspect the data carefully. Use the data
*     browser and xtsum commands to check that all
*     looks well.
* (2) Use the xtreg command to regress lwage_hr on age,
*     cohort and year, using the Fixed Effects and Between
*     Group estimators (use help xtreg to see which options
*     to specify). You'll find that two covariates are
*     dropped by Stata in the FE case and one in the BE case.
*     Why is that? (A sketch of the commands follows the
*     STOP line.)
**************************************************************

STOP HERE AND THINK!!
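* A sketch of the regressions in question (2) above (fe and be are the xtreg
* options for the fixed-effects and between estimators; this is one possible
* set of commands, not the only way to answer the question):
xtreg lwage_hr age cohort year if keeper==1, fe
xtreg lwage_hr age cohort year if keeper==1, be
* Hint: cohort is time-invariant, and year = cohort + 10*age holds exactly by
* construction, so the three covariates are collinear; think about which ones
* Stata drops in each case, and why, before moving on.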
*******************************************************************
* Now we are going to estimate the FE and between models manually,
* for comparison with the Stata estimates. Recall that the FE
* model can be estimated using deviations from individual means,
* while the between model uses the individual means. Create the
* means and deviations and examine them
*******************************************************************
* individual means & deviations
cap drop mlwage_hr devlwage_hr
egen mlwage_hr = mean(lwage_hr), by(id)
gen devlwage_hr=lwage_hr-mlwage_hr
sort id wave

**************************************************************
* Questions to think about:
* (1) Browse the variables id, wave, lwage_hr, mlwage_hr &
*     devlwage_hr to check that group means have been
*     created correctly.
* (2) Now create individual-specific means and deviations of
*     age, cohort & year as variables mage, mcohort & myear
*     and devage, devcohort & devyear. Check them using
*     xtsum.
**************************************************************

STOP HERE AND THINK!!

cap drop mage mcohort myear devage devcohort devyear
foreach v in age cohort year {
    cap drop m`v'
    egen m`v' = mean(`v'), by(id)
    gen dev`v'=`v'-m`v'
}

******************************************************************
* Perform an OLS regression of the deviation of log wage on the
* deviations of year, age and cohort.
* Run the regression without a constant (why?)
* Are there any differences compared to the Stata FE estimates
* (check the coefficients and SEs)?
******************************************************************
regress devlwage_hr /*devage devcohort*/ devyear if keeper==1, nocons
* force it to estimate the age coefficient *
regress devlwage_hr devage /*devcohort*/ if keeper==1, nocons
* compare with FE *
xtreg lwage_hr age /*cohort year*/ if keeper==1, fe

*******************************************************************
* Now do a similar regression using the individual means.
* By default, Stata does not take account of the fact that some
* individuals contribute more observations than others. To
* replicate this, do a simple (unweighted) OLS regression, using
* one observation per individual
*******************************************************************
by id: gen firstobs=_n==1       //each person's first observation
reg mlwage_hr mage mcohort /*myear*/ if firstobs==1 & keeper==1
* to compare *
xtreg lwage_hr age cohort /*year*/ if keeper==1, be
* drop year to force Stata to give a coefficient on age *

*********************************************************************
* Now we estimate a more complete model of hourly wages and
* salaries: as a function of age, age squared, birth cohort, marital
* status, sex, job tenure, possession of a degree or further
* education, trade union coverage, job contract type and state of
* residence
*********************************************************************

*********************************************************************
* Assume we don't care why the values are missing: set the negative
* values to missing and derive binary variables for the model
*********************************************************************
* create marriage categories *
replace mrcurr=. if mrcurr<0           //change negative values to missing
recode mrcurr (1=1) (else=0), gen(married)
recode mrcurr (2=1) (else=0), gen(defacto)
recode mrcurr (3=1) (else=0), gen(separated)
recode mrcurr (4=1) (else=0), gen(divorced)
recode mrcurr (5=1) (else=0), gen(widowed)
recode mrcurr (6=1) (else=0), gen(single)
tab wave
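* Aside: the block of recode commands above is one way to build indicator
* variables. A more compact alternative (a sketch only; mr_ is a hypothetical
* stub that is created and dropped straight away):
tab mrcurr, gen(mr_)      // mr_1-mr_6, one dummy per marital-status category
drop mr_*
* Alternatively, factor-variable notation (i.mrcurr) could be used directly in
* the estimation commands below instead of creating dummies at all.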
* contract of current work *
replace jbmcnt=. if jbmcnt<0
recode jbmcnt (1=1) (else=0), gen(fixedterm)
recode jbmcnt (2=1) (else=0), gen(casual)
recode jbmcnt (3=1) (else=0), gen(permanent)
replace fixedterm=1 if jbmcnt==8     /*collapse 14 other into fixed term*/
tab wave

* highest education achieved *
replace edhigh1=. if edhigh1<0
recode edhigh1 (1/3=1) (else=0), gen(degree)    //have a degree or higher qualification
recode edhigh1 (4/5=1) (else=0), gen(further)   //have done cert 3 or 4 after school
recode edhigh1 (8=1)   (else=0), gen(yr12)
recode edhigh1 (9=1)   (else=0), gen(yr11)
recode edhigh1 (10=1)  (else=0), gen(edunk)
tab wave

* trade union *
cap drop tucov
*label list ajbmunio
replace jbmunio=. if jbmunio<0
recode jbmunio (1=1) (else=0), gen(tucov)

* job tenure *
* label list ajbempt
replace jbempt=. if jbempt<0
summ jbempt, detail

**************************************************************
* Question to think about:
*
* Questions on job tenure often generate a few wild
* responses. Check for this by looking at the observations
* on jbempt and hgage. How many implausible jbempt cases are
* there? Decide and implement a rule for dropping cases
* that are clearly implausible. (One possible rule is
* sketched after the STOP line.)
**************************************************************

STOP HERE AND THINK!!
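* One possible rule (a sketch only - the cutoff is a judgment call and you may
* well prefer a different one): tenure with the current employer cannot
* plausibly exceed age minus 10 years.
list id wave hgage jbempt if jbempt>hgage-10 & jbempt<., clean
replace jbempt=. if jbempt>hgage-10 & jbempt<.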
* details on the state of residence *
replace hhstate=. if hhstate<0
recode hhstate (1=1) (else=0), gen(nsw)
recode hhstate (2=1) (else=0), gen(vic)
recode hhstate (3=1) (else=0), gen(qld)
recode hhstate (4=1) (else=0), gen(sa)
recode hhstate (5=1) (else=0), gen(wa)
recode hhstate (6=1) (else=0), gen(tas)
recode hhstate (7=1) (else=0), gen(nt)
recode hhstate (8=1) (else=0), gen(act)

* convert sex to a binary variable *
* label list sex
replace sex=. if sex<0
recode sex (2=1) (else=0), gen(female)

* (age/10) squared will be used as a covariate
gen agesq=age*age

*****************************************************************
* Estimate a FE model and interpret the coefficients. How
* important are the individual effects? Estimate the between-
* group model. How do the coefficients compare to the FE model?
* How do you interpret any differences?
* Drop year from all the models from now on.
*****************************************************************
sort id wave
xtreg lwage_hr age agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1, fe
* examine the means of all the variables in the subsample used for estimation:
summ lwage_hr age agesq married female jbemp degree further ///
    tucov permanent nsw if e(sample)
xtreg lwage_hr age agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1, be

***************************************************************
* Estimate the individual effects u(i). For help with
* prediction options, type help xtreg postestimation. Perform
* a second-step regression to estimate the coefficients
* associated with the time-invariant characteristics, female
* and cohort. First ignore the different numbers of
* observations contributed by individuals. Second, weight the
* observations accordingly. Hint: see the wls option for
* xtreg, be
***************************************************************
* first need to run FE again - do it "quietly" *
quietly xtreg lwage_hr age agesq married female jbemp degree ///
    further tucov permanent nsw if keeper==1, fe
predict ui, u
xtreg ui cohort female if keeper==1, be
xtreg ui cohort female if keeper==1, be wls

***************************************************************
* We are going to estimate the wage equation as a random
* effects model and compare the results to the previous FE
* and between estimates.
* First, estimate the RE model. How important are the
* individual effects? Are they statistically significant
* (use a Breusch-Pagan test)?
***************************************************************
xtreg lwage_hr age agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1, re
xttest0

***************************************************************
* Estimate the FE and between models to compare with RE (GLS).
* What do you notice?
***************************************************************
xtreg lwage_hr age agesq cohort married female jbemp degree further ///
    tucov permanent nsw if keeper==1, fe
xtreg lwage_hr age agesq cohort married female jbemp degree further ///
    tucov permanent nsw if keeper==1, be

*****************************************************************
* Questions to think about:
*
* (1) Recall that the GLS estimator uses a weighted average of
*     between and within variation. The GLS transform
*     consists in deviating variables from a fraction of
*     their individual means. Variables are transformed as:
*         devw(it) = w(it) - theta(i)*meanw(i)
*     where
*         theta(i) = 1 - sqrt[sigmasq_e/(sigmasq_e + T(i)*sigmasq_u)]
*
*     Note theta(i) is bounded between 0 and 1. If theta(i)=1,
*     we have the within estimator, and if theta=0 we have the
*     OLS estimator (equal weight given to within and between
*     variation). What are the average theta values used in the
*     wage eqn (re-run the random effects regression using the
*     theta option - a sketch follows the STOP line)? What do
*     you conclude?
* (2) Would it be ok to use OLS instead of GLS? Why (not)?
*****************************************************************

STOP HERE AND THINK!!
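* A sketch for question (1): re-running the RE model with the theta option
* reports the distribution of theta(i) across individuals.
xtreg lwage_hr age agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1, re theta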
****************************************************************
* Save a new version of the dataset for tomorrow.
* This avoids having to re-create variables.
****************************************************************
save "`working'\longperson_unbal_2.dta", replace

set more on
log close
exit

***************************************************************
* Optional additional exercise #1
***************************************************************

***************************************************************
* Calculate between and within variation manually and compare
* to xtsum.
* First calculate the grand mean and individual means, using
* the egen command
***************************************************************
cap drop gmws imws
egen gmws=mean(wsce)
egen imws=mean(wsce), by(id)        //mean for each individual

**************************************************************
* Next, calculate the deviations of individual means from
* the grand mean, and of each wsce observation from the
* individual-specific mean. Also calculate the total deviation
**************************************************************
replace gmws=. if wsce==.           //don't count missing obs
replace imws=. if wsce==.           //don't count missing obs
cap drop bdevws wdevws tdevws
gen bdevws=imws-gmws                //between deviation
gen wdevws=wsce-imws                //within deviation
gen tdevws=wsce-gmws                //total deviation
browse id wave wsce gmws imws bdevws wdevws tdevws

****************************************************************
* Square and sum the deviations to get the between sum of
* squares, within sum of squares and total sum of squares.
* Calculate the proportion due to within variation.
****************************************************************
cap drop bssws wssws tssws
egen bssws=total(bdevws^2)
egen wssws=total(wdevws^2)
egen tssws=total(tdevws^2)
su bssws wssws tssws
cap drop propw propb
gen propw=wssws/tssws               //proportion of within variation
gen propb=bssws/tssws               //proportion of between variation
su propw propb

*******************************************************************
* Optional additional exercise #2
* highlight the following lines and hit the execute button
*******************************************************************

*******************************************************************
* Allows for more general distributions of e(it).
* Correcting the standard errors for arbitrary heteroscedasticity
*******************************************************************
xtreg lwage_hr hgage agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1, fe robust

* testing for no serial correlation of e(it)
* External procedure xtserial already installed for you
xtserial lwage_hr hgage agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1

* Correcting standard errors for arbitrary serial correlation
xtreg lwage_hr hgage agesq married /*female*/ jbemp degree further ///
    tucov permanent nsw if keeper==1, fe robust cluster(id)

* Estimate the model assuming serial correlation is AR(1)
xtregar lwage_hr hgage agesq married female jbemp degree further ///
    tucov permanent nsw if keeper==1, fe

********************************************************************
* Optional additional exercise #3
* highlight the following lines and hit the execute button
********************************************************************

********************************************************************
* Manual RE estimation
********************************************************************
xtreg lwage_hr age agesq cohort married female jbemp degree further ///
    tucov permanent nsw if keeper==1, re

***************************************************************
* We are going to try to replicate the RE estimates manually
* using the GLS transform followed by OLS. Needs a bit of
* fiddly code! There are several steps:
* 1. Create a variable for T(i)
* 2. Calculate theta(i) using the appropriate scalars returned
*    by Stata above.
* 3. Calculate the deviations for each variable (including
*    the constant!)
* 4. Do an OLS regression using the deviations (but no constant)
* 5. Compare to the previous RE estimates
***************************************************************
by id: gen Ti=_N                    // the number of obs (by person)
gen theta=1-sqrt(e(sigma_e)^2/(e(sigma_e)^2+Ti*e(sigma_u)^2))
foreach x in lwage_hr age agesq cohort married female jbemp degree further ///
    tucov permanent nsw {
    capture drop m`x'
    egen m`x' = mean(`x'), by(id)
    capture drop dev`x'
    gen dev`x'=`x'-theta*m`x'       //deviation
}
gen transconst=1-theta              //transformed constant
reg devlwage_hr devage devagesq devcohort devmarried devfemale ///
    devjbemp devdegree devfurther devtucov devpermanent devnsw ///
    transconst if keeper==1, noconst

****************************************************************
* End of additional exercises
****************************************************************