By using the built-in citation counts from NASA's Astrophysics Data System (ADS) I derive how many citations refereed articles receive as a function of time since publication. After five years, one paper in a hundred has accumulated 91 or more citations, a figure which rises to 145 citations after 10 years. By adding up the number of citations active researchers have received over the past five years I have estimated their relative impact upon the field, both for raw citations and for citations weighted by the inverse of the number of authors per paper.

What makes a good paper? No objective measure is ever going to be perfect but being cited in another paper at least indicates that the work has been noticed and is thought to be worth mentioning. Papers with many citations are, in general, likely to be more useful and interesting than those that sink without a trace. This is much the same system as that used successfully by fast internet search engines to score sites so that they can provide a list ordered by usefulness; those sites which many people link to get a high score and appear near the top of the returned list.

Such a system is admittedly far from perfect, because it can be influenced by many factors such as having a large number of friends who cite you, self-citing your own papers excessively or producing a very good paper that concludes an avenue of research in such a way that it doesn't form the basis for a large body of subsequent endeavour. That said, there is undoubtedly a trend; papers with high citation counts tend to be better written, more interesting and useful than those that never get referred to again.

Below I first look at citation classics, the 1000 most-cited astronomy papers according to the ADS. Next I examine the citation counts for refereed astronomy papers published since 1970. From this the number of citations received by 1-in-10, 1-in-100 and 1-in-1000 papers can be obtained, and are shown in table 1. The number of citations received by papers of a specified age is also shown, where the age stated defines the centre of a one year range (so two years ago means papers published between 18 and 30 months ago).
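These 1-in-10, 1-in-100 and 1-in-1000 figures are simply quantiles of the ranked citation-count distribution. The bookkeeping can be sketched in a few lines of Python (illustrative only, with invented counts rather than the ADS data):

```python
def citation_thresholds(counts, fractions=(0.1, 0.01, 0.001)):
    """Return, for each fraction f, the citation count of the paper ranked
    f of the way down the list when sorted from most to least cited."""
    ranked = sorted(counts, reverse=True)
    # the 1-in-k paper sits at rank len(counts)/k from the top
    return {f: ranked[max(int(len(ranked) * f) - 1, 0)] for f in fractions}

# Toy distribution: 1000 papers with citation counts 0..999.
print(citation_thresholds(list(range(1000))))
```

With a real, heavily skewed citation distribution the same code reads off the table 1 style thresholds directly.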

Table 1:

Citation thresholds The number of citations required to cross successive power-of-ten likelihoods for papers published a specified number of years (plus six months) ago. The last four rows show the required number of citations and normalized citations (see text) required per researcher to reach the same likelihoods.


When can a researcher be said to have a high ongoing impact on the field? This question is even harder to answer, but a sequence of well-cited papers would be a good start. In the section “Defining good researchers” I suggest two measures of current impact on the field that are calculated by tallying up a researcher's total citations or normalized total citations over a rolling five-year interval. I calculate these numbers for a random sample of over 5000 astronomers with current publications in order to produce likelihood curves.

Citation classics

Quite how a paper passes into folklore and becomes a “citation classic” is difficult to determine. To enter the ranks of the 1000 most-cited astronomical papers in the ADS archive, a paper must have obtained 257 citations (as of November 2003, when the archive contained 439 746 papers). The webpage contains the latest list of the top 1000 most-cited papers. For the purposes of this study, papers are defined to be those found in the astronomy/planetary ADS archive, published in all refereed journals. Citation counts are those returned by ADS. It should be noted that these counts are not complete: some references are omitted because the citing journal is not within the ADS database, or because an older article has so far only been scanned. Inaccuracies in a citing paper's reference list can also lead to missed citations. The ADS does, however, contain complete reference lists for all the major astrophysics journals back to issue 1.

Figure 1 shows the distribution by publication-year of the 1000 most-cited astronomical papers. This peaks around 1985, a year that contributes nearly 50 papers to the total. It takes around a decade for the number of citation classics per year to rise to over 30, a figure that tallies well with the five-year timescale required to reach a maximum citation rate followed by a slow decline (Abt 1981).


Histogram showing the publication year of the 1000 most-cited astronomical papers. As of November 2003 an individual paper required 257 citations to make it on to this plot.



The oldest of these, Chandrasekhar (1943), “Stochastic Problems in Physics and Astronomy”, is a true classic, with over 900 citations, while the second oldest, Bondi and Hoyle (1944), “On the mechanism of accretion by stars”, is likely to drop out of the list soon as it has “only” 284 citations. Papers published this long ago are likely to have many contemporary citations missed, as those citing papers have not yet been entered into the archive. The most recently published additions to the list are Ahmad (2001), reporting neutrino measurements, and Freedman (2001), measuring the Hubble constant, with just over 300 citations each.

Many of the most highly cited papers in this list are measurements of fundamental parameters: Kurucz (1979), “Model atmospheres for G, F, A, B, and O stars”; Anders and Grevesse (1989), “Abundances of the elements — meteoritic and solar”; Landolt (1992), “UBVRI photometric standard stars in the magnitude range 11.5–16.0 around the celestial equator”; Savage and Mathis (1979), “Observed properties of interstellar dust”; and Draine and Lee (1984), “Optical properties of interstellar graphite and silicate grains”. All of these are in the top 10 most-cited papers.

How many citations do more normal papers receive? Using the counts in the ADS archive I have calculated the likelihood of a paper achieving a specified number of citations, for all papers in the archive published since 1970. I have also calculated these likelihoods for papers of a specified age, plus or minus six months. Nowadays roughly 15 000 papers are published per year, compared with 10 000 a decade ago and 8500 two decades ago. Figure 2 shows these likelihoods for a range of ages as well as the long-term average (bold line); the numbers of citations required to cross certain thresholds are given in table 1. One in ten papers published five years ago has now received more than 26 citations, with one in 100 getting more than 91, and one in 1000 more than 253 citations.
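A likelihood curve of this kind is just the survival function of the citation distribution: the fraction of papers receiving more than a given number of citations. A minimal sketch, with invented counts standing in for a real sample:

```python
def likelihood_curve(counts):
    """Fraction of papers receiving more than n citations, for n = 0..max."""
    n_papers = len(counts)
    return [sum(1 for c in counts if c > n) / n_papers
            for n in range(max(counts) + 1)]

# Toy sample: most papers poorly cited, a few highly cited.
curve = likelihood_curve([0, 0, 1, 1, 2, 3, 5, 8, 20, 91])
print(curve[0])   # fraction of papers with at least one citation
```

Plotting such a curve on logarithmic axes for each age bin reproduces the form of figure 2.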


Likelihood of a paper obtaining a specified number of citations for all papers published since 1970 (red line) and those published a specified number of years ago. The numbers of citations required to breach each likelihood decade are given in table 1.



Defining good researchers

Defining a rating system for researchers is a subject fraught with danger as any single proposed scheme is bound to contain inconsistencies. Here I attempt to come up with a scheme that provides a general guideline. An individual high score on either of the metrics should be subject to further examination and should only be treated as an indication — a researcher in the top few percent of both metrics is likely to be far more widely known than one in the bottom half. Other metrics for estimating impact have been examined by Kurtz (2004), who compared astrophysical research at a variety of venerable US institutions. Sánchez and Benn (2004) used citation statistics to measure impact as a function of the country of origin of the first author, concluding that language and other biases favour the large US- and UK-based communities.

The basis of both schemes described here is to consider only papers published in refereed journals within a rolling five-year time interval. This timespan is set to start five-and-a-half years before the current time and finish six months ago (as there are almost no citations in the first six months after publication). The citation score is then calculated by summing the total number of citations for those papers up to the current date using the automatic facility built in to ADS.
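The window is therefore a fixed 60-month range ending six months before the query date. The date arithmetic can be sketched as follows (my own reconstruction of the scheme just described, not code used in the study):

```python
def five_year_window(year, month):
    """Return ((start_year, start_month), (end_year, end_month)) for the
    60-month window ending six months before the given date."""
    end = year * 12 + (month - 1) - 6   # zero-based month index
    start = end - 59                    # 60 months, inclusive of both ends
    return (start // 12, start % 12 + 1), (end // 12, end % 12 + 1)

# Querying in April 2004 gives the 11/1998 to 10/2003 range quoted below.
print(five_year_window(2004, 4))
```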

This procedure is achieved by going to the ADS web page at Nottingham, typing a name in the author field and entering (for April 2004, for example) the dates 11/1998 and 10/2003 as the range of publication dates. The “select references from” field needs to be changed to “All refereed journals” and sorting done by citation count. This returns figures for the number of papers found and the total number of citations for those papers, together with an ordered list of the papers, each with its own citation count, although this list is not used here. A second measure, the normalized citation score, is calculated by weighting each paper by the inverse of its number of authors, reducing the impact of large collaborations, which often produce a large number of papers. No attempt is made to remove self-citations, however gratuitous (Pearce 2000).
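Both scores reduce to simple sums over a researcher's papers. A minimal sketch, with hypothetical (citation count, author count) records invented for illustration:

```python
def citation_scores(papers):
    """papers: list of (n_citations, n_authors) tuples for one researcher.
    Returns (raw citation score, normalized citation score)."""
    raw = sum(c for c, _ in papers)
    normalized = sum(c / a for c, a in papers)   # weight by 1/n_authors
    return raw, normalized

# Hypothetical record: three papers within the five-year window.
print(citation_scores([(30, 2), (10, 5), (4, 1)]))
```

The normalization visibly rewards small-team papers: the two-author paper above contributes 15 normalized citations, the five-author paper only 2.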

The difficult part of this study is obtaining a useful list of names to feed to the algorithm. I have taken all the unique surname-plus-first-initial combinations contributing two or more items to the ADS for authors whose surname starts with A, B or C. I have also disabled synonym matching on author names, which avoids pattern matching on middle initials and phonetic matching of similar-sounding names. This procedure will combine authors with names such as Martin A S and Martin A C, but such very similar names appear to be rare, at least among successful authors. I have checked that results near the top of the study suffer no detectable contamination.
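The name-selection step amounts to counting surname-plus-first-initial keys and keeping those with two or more entries. A sketch with invented author strings (the real list was drawn from the ADS database itself):

```python
from collections import Counter

def candidate_authors(author_fields, minimum=2):
    """Keep (surname, first initial) keys with at least `minimum` entries.
    Each author field is assumed to look like 'Surname, Initials'."""
    def key(name):
        surname, _, initials = name.partition(",")
        return (surname.strip(), initials.strip()[:1])
    counts = Counter(key(a) for a in author_fields)
    return sorted(k for k, n in counts.items() if n >= minimum)

names = ["Martin, A. S.", "Martin, A. C.", "Abt, H.", "Abt, H. A."]
print(candidate_authors(names))   # both Martins collapse to one key
```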

The final list of 18 346 names is automatically passed to the ADS server, which returns citation and normalized citation figures as detailed above. Not all these astronomers have published during the five-year period, as the source list of names spans the entire database. In total, 5136, or roughly 30%, of the authors have published two or more refereed papers in the study period. Figures 3 and 4 show the likelihood of achieving different citation and normalized citation counts respectively, both for all the authors with two or more papers and for the 2467 active researchers who published at least one paper a year over the study period.


Likelihood of an author achieving more than a specified number of citations within a recent five-year window. The lower curve is for all authors publishing two or more papers in the interval, the upper curve is for active researchers with five or more recent papers.




Likelihood of an author achieving more than a specified number of normalized citations within a recent five-year window. The lower curve is for all authors publishing two or more papers in the interval, the upper curve is for active researchers with five or more recent papers.



As table 1 shows, 10% of astronomers with two or more refereed papers in the last five years received more than 231 citations, a figure that rises to 382 for active researchers. Similarly, one in 10 publishing astronomers receives more than 40 normalized citations in the same period. A tar file containing the data displayed in figures 3 and 4 is available from the author.


By extracting citation counts from the ADS I have produced a list of the 1000 most-cited astronomical papers and produced likelihoods for papers receiving a certain number of citations as a function of the number of years that have elapsed since publication.

In this short work I have calculated two easily determined measures of success within the astronomical community. Although these numbers should be treated with care, they do at least provide some indication of the impact of a particular researcher on the field relative to their contemporaries. The numbers derived here are easily reproducible for any given person, and comparisons can be made against the averages for the community as a whole.






U. Munari | A. Henden | R. Belligoli | F. Castellani | G. Cherini | G. L. Righetti | A. Vagnozzi

Accurate and densely populated BVRCIC lightcurves of supernovae SN 2011fe in M101, SN 2012aw in M95 and SN 2012cg in NGC 4424 are presented and discussed. The SN 2011fe lightcurves span a total range of 342 days, from 17 days pre- to 325 days post-maximum. The observations of both SN 2012aw and SN 2012cg were stopped by solar conjunction, when the objects were still bright. The lightcurve for SN 2012aw covers 92 days, that of SN 2012cg spans 44 days. Time and brightness of maxima are measured, and from the lightcurve shapes and decline rates the absolute magnitudes are obtained, and the derived distances are compared to that of the parent galaxies. The color evolution and the bolometric lightcurves are evaluated in comparison with those of other well observed supernovae, showing no significant deviations. © 2012 Elsevier B.V. All rights reserved.

C. Destri | H. J. De Vega | N. G. Sanchez

We derive the main physical galaxy properties: mass, halo radius, phase space density and velocity dispersion from a semiclassical gravitational approach in which fermionic WDM is treated quantum mechanically. They turn out to be fully compatible with observations. The Pauli Principle implies for the fermionic DM phase-space density Q(r→)=ρ(r→)/ σ3 (r→) the quantum bound Q(r→)≤K m4 / ℏ3 , where m is the DM particle mass, σ(r→) is the DM velocity dispersion and K is a pure number of order one which we estimate. Cusped profiles from N-body galaxy simulations produce a divergent Q(r) at r=0 violating this quantum bound. The combination of this quantum bound with the behaviour of Q(r) from simulations, the virial theorem and galaxy observational data on Q implies lower bounds on the halo radius and a minimal distance rmin from the centre at which classical galaxy dynamics for DM fermions breaks down. For WDM, rmin turns to be in the parsec scale. For cold dark matter (CDM), rmin is between dozens of kilometers and a few meters, astronomically compatible with zero. For hot dark matter (HDM), rmin is from the kpc to the Mpc. In summary, this quantum bound rules out the presence of galaxy cusps for fermionic WDM, in agreement with astronomical observations, which show that the DM halos are cored. We show that compact dwarf galaxies are natural quantum macroscopic objects supported against gravity by the fermionic WDM quantum pressure (quantum degenerate fermions) with a minimal galaxy mass and minimal velocity dispersion. Quantum mechanical calculations which fulfil the Pauli Principle become necessary to compute galaxy structures at kpc scales and below. Classical N-body simulations are not valid at scales below rmin . 
We apply the Thomas-Fermi semiclassical approach to fermionic WDM galaxies, we resolve it numerically and find the physical galaxy magnitudes: mass, halo radius, phase-space density, velocity dispersion, fully consistent with observations especially for compact dwarf galaxies. Namely, fermionic WDM treated quantum mechanically, as it must be, reproduces the observed galaxy DM cores and their sizes. The lightest known dwarf galaxy (Willman I) implies a lower bound for the WDM particle mass m > 0.96 keV. These results and the observed galaxies with halo radius ≥30 pc and halo mass ≥4×10 5Mȯ provide further indication that the WDM particle mass m is approximately in the range 1-2 keV. © 2012 Elsevier B.V. All rights reserved.

E. H. Doha | W. M. Abd- Elhameed | Y. H. Youssri

In this paper, we present a new second kind Chebyshev (S2KC) operational matrix of derivatives. With the aid of S2KC, an algorithm is described to obtain numerical solutions of a class of linear and nonlinear Lane-Emden type singular initial value problems (IVPs). The idea of obtaining such solutions is essentially based on reducing the differential equation with its initial conditions to a system of algebraic equations. Two illustrative examples concern relevant physical problems (the Lane-Emden equations of the first and second kind) are discussed to demonstrate the validity and applicability of the suggested algorithm. Numerical results obtained are comparing favorably with the analytical known solutions. © 2013 Elsevier B.V. All rights reserved.

Ealeal Bear | Noam Soker

We discuss the possibility of observing the transient formation event of an accretion disk from the tidal destruction process of an asteroid near a white dwarf (WD). This scenario is commonly proposed as the explanation for dusty disks around WDs. We find that the initial formation phase lasts for about a month and material that ends in a close orbit near the WD forms a gaseous disk rather than a dusty disk. The mass and size of this gaseous accretion disk is very similar to that of Dwarf Novae (DNe) in quiescence. The bolometric luminosity of the event at maximum is estimated to be ∼0.001-0.1 Lȯ . Based on the similarity with DNe we expect that transient outburst events such as discussed here will be observed at wavelengths ranging from visible to the X-ray, and be detected by present and future surveys. © 2012 Elsevier B.V. All rights reserved.

Liton Majumdar | Ankan Das | Sandip K. Chakrabarti | Sonali Chakrabarti

We carry out a quantum chemical calculation to obtain the infrared and electronic absorption spectra of several complex molecules of the interstellar medium (ISM). These molecules are the precursors of adenine, glycine & alanine. They could be produced in the gas phase as well as in the ice phase. We carried out a hydro-chemical simulation to predict the abundances of these species in the gas as well as in the ice phase. Gas and grains are assumed to be interacting through the accretion of various species from the gas phase onto the grain surface and desorption (thermal evaporation and photo-evaporation) from the grain surface to the gas phase. Depending on the physical properties of the cloud, the calculated abundances varies. The influence of ice on vibrational frequencies of different pre-biotic molecules was obtained using Polarizable Continuum Model (PCM) model with the integral equation formalism variant (IEFPCM) as default SCRF method with a dielectric constant of 78.5. Time dependent density functional theory (TDDFT) is used to study the electronic absorption spectrum of complex molecules which are biologically important such as, formamide and precursors of adenine, alanine and glycine. We notice a significant difference between the spectra of the gas and ice phase (water ice). The ice could be mixed instead of simple water ice. We have varied the ice composition to find out the effects of solvent on the spectrum. We expect that our study could set the guidelines for observing the precursor of some bio-molecules in the interstellar space. © 2012 Elsevier B.V. All rights reserved.

Ankan Das | Liton Majumdar | Sandip K. Chakrabarti | Sonali Chakrabarti

Chemical composition of a molecular cloud is highly sensitive to the physical properties of the cloud. In order to obtain the chemical composition around a star forming region, we carry out a two dimensional hydrodynamical simulation of the collapsing phase of a proto-star. A total variation diminishing scheme (TVD) is used to solve the set of equations governing hydrodynamics. This hydrodynamic code is capable of mimicking evolution of the physical properties during the formation of a proto-star. We couple our reasonably large gas-grain chemical network to study the chemical evolution during the collapsing phase of a proto-star. To have a realistic estimate of the abundances of bio-molecules in the interstellar medium, we include the recently calculated rate coefficients for the formation of several interstellar bio-molecules into our gas phase network. Chemical evolution is studied in detail by keeping grain at the constant temperature throughout the simulation as well as by using the temperature variation obtained from the hydrodynamical model. By considering a large gas-grain network with the sophisticated hydrodynamic model more realistic abundances are predicted. We find that the chemical composition are highly sensitive to the dynamic behavior of the collapsing cloud, specifically on the density and temperature distribution. © 2013 Elsevier B.V. All rights reserved.

Yude Bu | Fuqiang Chen | Jingchang Pan

Isometric feature map (Isomap), a nonlinear dimension reduction technique, can preserve both the local and global structure of the data when embed the original data into much lower dimensional space. In this paper we will investigate the performance of Isomap + SVM in classifying the stellar spectral subclasses. We first reduce the dimension of spectra data by PCA and Isomap respectively. Then we apply support vector machine (SVM) to classify the 4 subclasses of K-type spectra from Sloan Digital Sky Survey (SDSS). The experiment result shows that Isomap-based SVM (IS) perform better than PCA-based SVM (PS) with the default γ in SVM, except on the spectra whose SNRs are between 5 and 10 in our experiment. The performance of PS and IS both change in a larger range with the increase of signal-to-noise ratio of the spectra. © 2013 Elsevier Ltd. All rights reserved.

L. H. Deng | B. Li | Y. F. Zheng | X. M. Cheng

Three nonlinear approaches, including the cross-recurrence plot, line of synchronization and cross-wavelet transform, have been proposed to analyze the phase asynchrony between 10.7 cm solar radio flux and sunspot numbers during the period of 1947 February to 2012 June. It is found that, (1) the amplitude variation of the two indicators become more asynchronous around the minimum and maximum of a solar cycle than at the ascending and descending phases of the cycle; (2) the phase relationship between them is not only time-dependent but also frequency-dependent, which may be related to the processes of accumulation and dissipation of solar magnetic energy from the lower to the upper atmosphere. Our findings indicate that bright regions and large sunspot groups are more likely to shed light on solar energy radiation than active regions and small sunspot groups. © 2013 Elsevier B.V. All rights reserved.

Salman Habib | Adrian Pope | Hal Finkel | Nicholas Frontiere | Katrin Heitmann | David Daniel | Patricia Fasel | Vitali Morozov | George Zagaris | Tom Peterka | Venkatram Vishwanath | Zarija Lukić | Saba Sehrish | Wei Keng Liao

© 2015 ElsevierB.V.Allrightsreserved. Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.

Ataru Tanikawa | Kohji Yoshikawa | Keigo Nitadori | Takashi Okamoto

We have developed a numerical software library for collisionless N-body simulations named "Phantom-GRAPE" which highly accelerates force calculations among particles by use of a new SIMD instruction set extension to the x86 architecture, Advanced Vector eXtensions (AVX), an enhanced version of the Streaming SIMD Extensions (SSE). In our library, not only the Newton's forces, but also central forces with an arbitrary shape f(r), which has a finite cutoff radius rcut (i.e. f(r)=0 at r > rcut ), can be quickly computed. In computing such central forces with an arbitrary force shape f(r), we refer to a pre-calculated look-up table. We also present a new scheme to create the look-up table whose binning is optimal to keep good accuracy in computing forces and whose size is small enough to avoid cache misses. Using an Intel Core i7-2600 processor, we measure the performance of our library for both of the Newton's forces and the arbitrarily shaped central forces. In the case of Newton's forces, we achieve 2×10 9 interactions per second with one processor core (or 75 GFLOPS if we count 38 operations per interaction), which is 20 times higher than the performance of an implementation without any explicit use of SIMD instructions, and 2 times than that with the SSE instructions. With four processor cores, we obtain the performance of 8×10 9 interactions per second (or 300 GFLOPS). In the case of the arbitrarily shaped central forces, we can calculate 1×10 9 and 4×10 9 interactions per second with one and four processor cores, respectively. The performance with one processor core is 6 times and 2 times higher than those of the implementations without any use of SIMD instructions and with the SSE instructions. These performances depend only weakly on the number of particles, irrespective of the force shape. It is good contrast with the fact that the performance of force calculations accelerated by graphics processing units (GPUs) depends strongly on the number of particles. 
Substantially weak dependence of the performance on the number of particles is suitable to collisionless N-body simulations, since these simulations are usually performed with sophisticated N-body solvers such as Tree- and TreePM-methods combined with an individual timestep scheme. We conclude that collisionless N-body simulations accelerated with our library have significant advantage over those accelerated by GPUs, especially on massively parallel environments. © 2012 Elsevier B.V. All rights reserved.

V. M. Velasco Herrera | B. Mendoza | G. Velasco Herrera

Total solar irradiance is the primary energy source of the Earth's climate system and therefore its variations can contribute to natural climate change. This variability is characterized by, among other manifestations, decadal and secular oscillations, which has led to several attempts to estimate future solar activity. Of particular interest now is the fact that the behavior of the solar cycle 23 minimum has shown an activity decline not previously seen in past cycles for which spatial observations exist: this could be signaling the start of a new grand solar minimum. The estimation of solar activity for the next hundred years is one of the current problems in solar physics because the possible occurrence of a future grand solar minimum will probably have an impact on the Earth's climate. In this study, using the PMOD and ACRIM TSI composites, we have attempted to estimate the TSI index from year 1000 AD to 2100 AD based on the Least Squares Support Vector Machines, which is applied here for the first time to estimate a solar index. Using the wavelet transform, we analyzed the behavior of the total solar irradiance time series before and after the solar grand minima. Depending on the composite used, PMOD (or ACRIM), we found a grand minimum for the 21st century, starting in ∼2004 (or 2002) and ending in ∼2075 (or 2063), with an average irradiance of 1365.5 (or 1360.5) Wm -2 ±1σ=0.3 (or 0.9) Wm -2 . Moreover, we calculated an average radiative forcing between the present and the 21st century minima of ∼-0.1 (or -0.2) Wm -2 , with an uncertainty range of -0.04 to -0.14 (or -0.12 to -0.33) Wm -2 . As an indicator of the TSI level, we calculated its annual power anomalies; in particular, future solar cycles from 24 to 29 have lower power anomalies compared to the present, for both models. 
We also found that the solar activity grand minima periodicity is of 120 years; this periodicity could possibly be one of the principal periodicities of the magnetic solar activity not so previously well recognized. The negative (positive) 120-year phase coincides with the grand minima (maxima) of the 11-year periodicity. © 2014 Elsevier B.V. All rights reserved.

Noam Soker

I find the common envelope (CE) energy formalism, the CE α-prescription, to be inadequate to predict the final orbital separation of the CE evolution in massive envelopes. I find that when the orbital separation decreases to ∼10 times the final orbital separation predicted by the CE α-prescription, the companion has not enough mass in its vicinity to carry away its angular momentum. The core-secondary binary system must get rid of its angular momentum by interacting with mass further out. The binary system interacts gravitationally with a rapidly-rotating flat envelope, in a situation that resembles planet-migration in protoplanetary disks. The envelope convection of the giant carries energy and angular momentum outward. The basic assumption of the CE α-prescription, that the binary system's gravitational energy goes to unbind the envelope, breaks down. Based on that, I claim that merger is a common outcome of the CE evolution of AGB and red super-giants stars with an envelope to secondary mass ratio of M env /M 2 ≳ 5. I discuss some other puzzling observations that might be explained by the migration and merger processes. © 2012 Elsevier B.V. All rights reserved.

E. Yaz Gökçe | S. Bilir | N. D. Öztürkmen | Ş Duran | T. Ak | S. Ak | S. Karaali

We present the first determination of absolute magnitudes for the red clump (RC) stars with the Wide-field Infrared Survey Explorer (WISE). We used recently reduced parallaxes taken from the Hipparcos catalogue and identified 3889 RC stars with the WISE photometry in the Solar neighbourhood. Mode values estimated from the distributions of absolute magnitudes and a colour of the RC stars in WISE photometry are MW1 =-1.635±0.026, MW3 =-1.606±0.024 and (W1-W3) 0 =-0.028±0.001 mag. These values are consistent with those obtained from the transformation formulae using 2MASS data. Distances of the RC stars estimated by using their MW1 and MW3 absolute magnitudes are in agreement with the ones calculated by the spectrophotometric method, as well. These WISE absolute magnitudes can be used in astrophysical researches where distance plays an important role. © 2013 Elsevier B.V. All rights reserved.

G. Renzetti

In this paper, I critically examine the first published results of the LARES mission, which aims to measure the relativistic Lense-Thirring drag on the orbit of a satellite around a rotating mass. © 2013 Elsevier B.V. All rights reserved.

Prasun Dutta | Ayesha Begum | Somnath Bharadwaj | Jayaram N. Chengalur

We estimate the H i intensity fluctuation power spectrum for a sample of 18 spiral galaxies chosen from THINGS. Our analysis spans a large range of length-scales, from ∼300 pc to ∼16 kpc, across the entire galaxy sample. We find that the power spectrum of each galaxy is well fitted by a power law P_HI(U) = A U^α, with an index α that varies from galaxy to galaxy. For some of the galaxies the scale-invariant power-law power spectrum extends to length-scales comparable to the size of the galaxy's disk. The distribution of α is strongly peaked, with 50% of the values in the range α = -1.9 to -1.5, and a mean and standard deviation of -1.3 and 0.5 respectively. We find no significant correlation between α and the star formation rate, dynamical mass, H i mass or velocity dispersion of the galaxies. Several earlier studies that have measured the power spectrum within our Galaxy on length-scales considerably smaller than 500 pc have found a power-law power spectrum with α in the range ≈-2.8 to -2.5. We propose a picture in which the values in the range ≈-2.8 to -2.5 arise from three-dimensional (3D) turbulence in the interstellar medium (ISM) on length-scales smaller than the galaxy's scale-height, while the values in the range ≈-1.9 to -1.5 measured in this paper arise from two-dimensional ISM turbulence in the plane of the galaxy's disk. It remains difficult, however, to explain the small galaxy-to-galaxy variations in the values of α measured here. © 2012 Elsevier B.V. All rights reserved.
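Fitting a power law P(U) = A U^α of the kind described above reduces to a straight-line fit in log-log space. A minimal sketch on synthetic data (the spectrum below is generated, not taken from THINGS):

```python
import numpy as np

# Hedged sketch: recover the power-law index alpha of a spectrum
# P(U) = A * U**alpha by least squares in log-log space, analogous to
# the fits described above. The data here are synthetic.

rng = np.random.default_rng(0)
U = np.logspace(0, 2, 50)                     # baselines (arbitrary units)
alpha_true, A = -1.3, 10.0
P = A * U**alpha_true * rng.lognormal(0.0, 0.05, U.size)  # noisy spectrum

# Fit log10 P = log10 A + alpha * log10 U
alpha_fit, logA_fit = np.polyfit(np.log10(U), np.log10(P), 1)
```

With mild multiplicative scatter the fitted slope lands close to the input index of -1.3, the sample mean reported in the abstract.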

J. Javaraiah

The combined Greenwich and Solar Optical Observing Network (SOON) sunspot group data during 1874-2013 are analysed to study the relatively long-term variations in the annual sums of the areas of sunspot groups in the 0°-10°, 10°-20°, and 20°-30° latitude intervals of the Sun's northern and southern hemispheres. The variations in the corresponding north-south differences are also studied. Long periodicities in these parameters are determined from the fast Fourier transform (FFT), the maximum entropy method (MEM), and Morlet wavelet analysis. It is found that the difference between the sums of the areas of the sunspot groups in the 0°-10° latitude intervals of the northern and southern hemispheres shows a ≈9-year periodicity during the high-activity period 1940-1980 and a ≈12-year periodicity during the low-activity period 1890-1939. It is also found that there exists a high correlation (85% from 128 data points) between the sum of the areas of the sunspot groups in the 0°-10° latitude interval of the southern hemisphere during a Qth year (the middle year of a 3-year smoothed time series) and the annual mean International Sunspot Number (R_Z) of the (Q+9)th year. The implications of these results are discussed in the context of solar activity prediction, and an amplitude of 50±10 is predicted for solar cycle 25, about 31% lower than the amplitude of cycle 24. © 2014 Elsevier Inc. All rights reserved.
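The FFT periodogram step described above amounts to locating the dominant peak in the power spectrum of an annual time series. A minimal sketch on a synthetic series with a built-in 9-year cycle (not the actual sunspot-area data):

```python
import numpy as np

# Hedged sketch: find the dominant long-term periodicity in an annual
# time series with the FFT, as in the periodogram analysis described
# above. The series is synthetic: a 9-year sinusoid plus noise.

years = np.arange(1874, 2014)                     # one sample per year
signal = np.sin(2 * np.pi * years / 9.0)          # a 9-year cycle
signal += 0.2 * np.random.default_rng(1).normal(size=years.size)

power = np.abs(np.fft.rfft(signal - signal.mean()))**2
freqs = np.fft.rfftfreq(years.size, d=1.0)        # cycles per year
period = 1.0 / freqs[1:][np.argmax(power[1:])]    # skip the zero frequency
```

The recovered period is limited by the frequency resolution of a 140-year record, which is why methods such as MEM and wavelets are used alongside the FFT for refinement and time localization.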

Evgeny Griv | Chien Cheng Lin | Chow Choong Ngeow | Ing Guey Jiang

The rotation about the Galactic center of open clusters belonging to the thin component of the Milky Way Galaxy is studied on the basis of line-of-sight velocities and positions for 169 nearby objects taken from the literature. The minor second-order effects caused by the Lin-Shu-type density waves are taken into account by using the least-squares numerical method. Although preliminary, the physical interpretation of the results obtained in this manner shows that (i) among the several Fourier modes of collective oscillations developing in the solar neighborhood the one-armed m=1 spiral mode is the main one; the Galaxy thus has significant lopsidedness in the stellar distribution at large radii, (ii) the Sun is located between the major trailing spiral-arm segments in Carina-Sagittarius and Perseus, closer to the outer Perseus one, (iii) the local Cygnus-Orion segment is not part of the dominant spiral arm but is a minor one, due to a secondary Fourier harmonic of the Galaxy's oscillations, (iv) the pitch angle of the dominant density-wave pattern in the solar vicinity seems to be relatively small, of the order of 7°, and the wavelength (the radial distance between spiral arms) of the m=1 pattern is about 6 kpc, (v) the Galactocentric distance at which the velocities of disk rotation and of the spiral density wave coincide (the corotation radius) lies outside the solar circle; the pattern angular speed is thus lower than the local angular rotation velocity, and finally (vi) the spiral arms of the Galaxy do not represent small deviations of the surface density and gravitational potential from a basic distribution that is axisymmetric in the mean. © 2013 Elsevier B.V. All rights reserved.

Agnieszka Janiuk | M. Bejger | S. Charzyński | P. Sukova

Data from the Fermi Gamma-ray Burst Monitor satellite observatory suggested that the recently discovered gravitational wave source, a pair of two coalescing black holes, was related to a gamma-ray burst. The observed high-energy electromagnetic radiation (above 50 keV) originated from a weak transient source and lasted for about 1 s. Its localization is consistent with the direction to GW150914. We speculate about a possible scenario for the formation of a gamma-ray burst accompanying the gravitational-wave signal. Our model invokes a tight binary system consisting of a massive star and a black hole, which leads to the triggering of the collapse of the star's core, the formation of a second black hole, and finally the binary black hole merger. For the most-likely configuration of the binary spin vectors with respect to the orbital angular momentum in the GW150914 event, the recoil speed (kick velocity) acquired by the final black hole through gravitational wave emission is of the order of a few hundred km/s; this might be sufficient to bring it closer to the envelope of surrounding material and let it capture a small fraction of matter from the remnant of the host star. The gamma-ray burst is produced by the accretion of this remnant matter onto the final black hole. The moderate spin of the final black hole suggests that the gamma-ray burst jet is powered by weak neutrino emission rather than the Blandford–Znajek mechanism, which explains the low power available for the observed GRB signal. © 2016 Elsevier B.V.
