Public Articles
Results from Fake Single Component Galaxies Tests
and 2 collaborators
The fake galaxy models are generated with the Sersic function in the GalSim package, based on parameters from an input catalog. The procedure can be summarized in the following steps:
We have tested these GalSim models by applying Galfit to images generated in exactly the same way. The results show that our models are reliable.
To make sure that the fake galaxies we inject on the images are as realistic as possible, we choose to use the models of COSMOS galaxies from \cite{Mandelbaum_2014}. The catalog is based on Exponential (Exp), De Vaucouleurs (Dev), and single-component Sersic (Sersic) fits of galaxies with \(I_{F814W} \le 23.5\) mag on the ACS high-resolution (0.03''/pix) images.
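The Sersic profile underlying these models has a standard closed form; as a quick reference, here is a minimal sketch (not the GalSim implementation) using the common approximation \(b_n \approx 2n - 1/3\), which is adequate for \(n \gtrsim 0.5\):

```python
import math

def sersic_profile(r, r_e, n, i_e):
    """Sersic surface-brightness profile I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)).

    r   : radius
    r_e : effective (half-light) radius
    n   : Sersic index (n = 1 gives Exp, n = 4 gives Dev)
    i_e : surface brightness at r = r_e
    """
    b_n = 2.0 * n - 1.0 / 3.0  # common approximation, adequate for n >~ 0.5
    return i_e * math.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))
```

By construction the profile equals \(I_e\) at \(r = r_e\) for any Sersic index.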
From the full COSMOS catalog, we select appropriate Exp, Dev, and Sersic models according to the following standards:
MADEXP_DEV is the ratio of the MAD (median absolute deviation) of the Exp model to that of the Dev model. A MADEXP_DEV smaller than 1.0 indicates that the galaxy is more Exp-like; larger than 1.0 means it is more Dev-like. The cuts at low axis ratio and low Sersic index are applied simply because GalSim sometimes fails to generate such models when the maximum number of iterations is reached.
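The selection logic can be sketched as a small filter function; the threshold values below are illustrative placeholders, not the actual cuts used in the catalog selection:

```python
def select_model_type(mad_exp_dev, axis_ratio, sersic_n,
                      min_axis_ratio=0.1, min_sersic_n=0.3):
    """Classify a COSMOS model as 'exp' or 'dev' and apply quality cuts.

    mad_exp_dev : ratio of the MADs of the Exp and Dev fits (< 1 -> more Exp-like)
    min_axis_ratio, min_sersic_n : placeholder cut values; very flat or
        very low-n models can fail to render in GalSim
    Returns 'exp', 'dev', or None if the model fails the cuts.
    """
    if axis_ratio < min_axis_ratio or sersic_n < min_sersic_n:
        return None  # drop models that GalSim may fail to generate
    return 'exp' if mad_exp_dev < 1.0 else 'dev'
```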
At this point, we only work on single-frame images. A group of 22 "clean" images is selected from the visit=1236 COSMOS-UDEEP i-band data for this test. These images come from CCDs close to the center of the camera, and we visually check them to ensure that contamination from bright saturated stars is minimal.
For each run, 50 models are randomly selected from the input catalog and injected into these 22 images at random pixel positions. The galaxies and pixel positions are the same for each CCD. The calibration parameters and PSF models are extracted at the exact X-Y locations and passed to the function that generates the fake galaxy image. Appropriate noise is also added to the models before we put them on the images. We make sure the random image coordinates are not too close to the edge, but do not make a special effort to avoid real objects on the images. The X-Y coordinates of these fake galaxies, along with their IDs, are recorded in the image headers.
After that, the fake-injected images are passed to the pipeline for source detection and photometric measurement. We cross-match the X-Y coordinates of the fake objects with the ones estimated by the pipeline using a 2-pixel maximum separation. For objects that return multiple matches, we keep the one with the smallest separation (Claire has tried a different approach, keeping all matched objects; it has very little impact on the results). Meanwhile, we also keep a record of the fakes without any matched object.
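The cross-matching step above can be sketched as follows (a minimal illustration, not the pipeline code): each injected fake keeps the closest detection within the maximum separation, and fakes with no detection nearby are recorded as unmatched.

```python
import math

def cross_match(fakes, detections, max_sep=2.0):
    """Match each injected fake (id, x, y) to the nearest detection within
    max_sep pixels; on multiple matches keep the smallest separation.

    fakes      : list of (id, x, y) tuples for the injected objects
    detections : list of (x, y) tuples from the pipeline
    Returns (matched, unmatched): matched maps fake id -> detection index.
    """
    matched, unmatched = {}, []
    for fid, fx, fy in fakes:
        best, best_sep = None, max_sep
        for i, (dx, dy) in enumerate(detections):
            sep = math.hypot(dx - fx, dy - fy)
            if sep <= best_sep:
                best, best_sep = i, sep
        if best is None:
            unmatched.append(fid)
        else:
            matched[fid] = best
    return matched, unmatched
```

For the ~1100 fakes per run a brute-force loop like this is fine; a k-d tree would be the natural replacement at larger scale.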
To make sure that the input models sample the intrinsic distributions of key parameters of the COSMOS galaxy models, we repeat this process 9 times. The same model can be selected in different runs, but only rarely. In general, we have 420-440 different models for the Exp, Dev, and Sersic cases. For each model, the average, median, and standard deviation of important photometric parameters are estimated from all the detections (in most cases >15 out of 22) and compared with the input values. Normally, for each run, 5-7% of the fake objects (22 CCDs x 50 models = 1100 fake objects) have no match within 2 pixels. Most of these cases are due to the faintness of the model and/or proximity to bright objects.
At this point, we focus on comparing the input parameters with the magnitude, size, and shape measured by the CModel method in the pipeline. We do notice that, among all the fake objects injected in each run, 6-8% have failed CModel photometry. It is not clear what exactly causes this problem. To make the comparison more relevant to the photometric measurement itself, we further exclude all matched detections with \(nChild > 0\) (normally >10/22).
Prospectus and Annotated Bibliography
Given the high precision of analysis techniques implemented at the LHC at CERN, there has been increasing opportunity to test theories beyond the current model of fundamental physics. One such theory of physics “beyond the Standard Model” is known as Supersymmetry; it proposes an additional symmetry of space-time, allowing for a family of particles that are exact duplicates (except for one quantity, labeled R) of those found in the Standard Model. Associated with this extension is a corresponding conservation law in supersymmetric interactions known as ‘R-parity’ \cite{Martin_2010}. R-parity conserving decays have been in high focus since they provide an explanation for the large amount of dark matter observed in the universe \cite{Lahanas_2007}\cite{Garrett_2011}. Since R must be conserved, the lightest supersymmetric particle (LSP for short) would not be able to decay to any other particle, which would explain the large amount of seemingly stable dark matter \cite{Lahanas_2007}. For this reason, there has been a large effort to look for data that resemble R-parity conserving modes, with much less attention paid to R-parity violating decays.
Over the past two years, I performed an analysis looking for data consistent with a supersymmetric decay. My target process is a supersymmetric top decaying to oppositely charged W-bosons, one of which decays to a positive muon and an anti-b quark, while the second decays to a b-quark and a negatively charged muon. I chose to look for a particular decay that resembles a Standard Model interaction with well-understood backgrounds because I assumed that the process would behave similarly, except for the additional symmetry, which I ignored as part of the analysis. Therefore, I chose a process whose major backgrounds are well modeled by current Monte Carlo methods.
One variable that affects the rate at which this occurs is the mass of the supersymmetric top quark \cite{Dolgov_2006}. Since this quantity has not been measured, the goal of my research was to calculate a cutoff mass, below which I can say there is enough data to disprove the theory. I did this by comparing the numbers of events expected if the theory were true with the numbers found in the data sample. Since the probability of the decay decreases as the supersymmetric top mass increases, there should be a point at which the expected numbers flip from being too high to too low. The point at which this flip occurs is the “cutoff”: below it, the data does not have enough events to support the theory.
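The cutoff scan described above can be illustrated schematically. The masses and event counts below are invented placeholders, not results from the analysis; the logic is simply to walk up the mass scan until the expected yield drops below the observed count:

```python
def cutoff_mass(masses, expected_counts, observed):
    """Return the smallest scanned mass at which the expected signal yield
    drops below the observed event count (the 'flip' point).

    masses          : sorted list of hypothesized stop masses
    expected_counts : expected event counts if the theory were true, one per
                      mass (decreasing, since the cross-section falls with mass)
    observed        : event count seen in the data sample
    """
    for m, exp_n in zip(masses, expected_counts):
        if exp_n < observed:
            return m  # below this mass, data has too few events for the theory
    return None  # no flip within the scanned range
```

A real analysis would of course replace this bare comparison with a statistical limit-setting procedure.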
The data came from the latest set published by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider at CERN in Switzerland. At the core of the design is a superconducting solenoid magnet that is 6 m in diameter, 13 m long, and generates a 4 T field used to determine the charge of particles. Outside the magnet is an electromagnetic calorimeter (ECAL) designed to measure electromagnetic deposits. After the products of the decay have passed through the ECAL, they reach the hadronic calorimeter (HCAL), which absorbs most of the energy left in the collision. The particles that do make it through the HCAL are either neutrinos or muons. The muons are detected in a separate configuration around the magnet composed of drift tubes and cathode-strip detectors \cite{AUFFRAY_2002}\cite{Sguazzoni_2008}.
The result of this analysis will provide future researchers with a better sense of possible values for the mass of the supersymmetric top and possibly allow for the creation of more specialized detectors that focus on higher mass regions than the calculated cutoff.
Measurement of ttbar\cite{Aad_2014}
This paper provides a lot of important background information concerning the major backgrounds for the Standard Model version of my decay. Since it specifically focuses on ttbar production, it discusses many of the issues that I encountered while performing the analysis. This paper will be useful in the background section to provide appropriate information, as well as to guide me during my analysis of the data.
Stops and neutrino mass hierarchy \cite{Marshall_2014}
This paper outlines the correlation between supersymmetric top decay parameters and the neutrino mass hierarchy. It will prove useful in the section dedicated to future research, as it shows a way for this analysis to provide guidance for future work.
Previous searches for R-parity violation \cite{Stoye_2010}
This paper discusses previous searches for R-parity violation and will be used in my background section to discuss the current state of research in the field.
A general summary of susy searches at LHC \cite{Paige_1999}
Whereas the previous paper discusses more current efforts to search for SUSY, this paper provides an outlook on the field when it was first being discussed. By looking at the original papers, the original reasons for pursuing particular lines of research become much clearer. This paper will be referenced in my introduction for a description of the field before we knew where to look.
Dark matter considering MSSM \cite{Trotta_2007}
This paper will also go in my introduction and provides a link between dark matter and the MSSM model. By providing this sort of information, it becomes clear that the gap I am filling with my research is not commonly considered due to the attractiveness of other theories, increasing the usefulness of my research.
Dark Matter Primer \cite{Martin_2010}
This paper will be referenced in my introduction and provides a different viewpoint on the Dark Matter problem as it stands in modern astrophysics. By highlighting this issue I am able to drive home the importance of my research.
LSP as DM Candidate \cite{Dolgov_2006}
This paper describes the viability of the LSP as a dark matter candidate and will be referenced in my introduction to provide additional background material.
Detector physics at the LHC \cite{AUFFRAY_2002}\cite{Sguazzoni_2008}
These two papers provide a detailed description of the two technologies used in the CMS detector. By looking at the details of the construction, I am able to notice possible shortcomings of the experiment and find ways in which it can be improved. These papers will be referenced in my introduction as well as briefly in the section concerning data acquisition.
A Gender Problem? In Academia?
Friday, an op-ed piece actually titled “Academic Science Isn’t Sexist” went up on the New York Times blog (a version appeared in the Sunday Review). It was about academic research and the lack of sexism therein. The two editorialists are co-authors on a recently released analysis on the subject (it is beautifully open access, and much of the raw data is available).
The piece and the paper claim sexism has largely waned in academic research, the result of shifts away from a previously sexist, male-dominated academy. Further, they claim that any remaining incongruities between male and female enrollment, advancement, and achievement are artifacts and anecdotal. Academic research is completely gender-blind now. Any differences are largely the product of society-at-large and earlier life decisions (like the choice to play with dolls/cute animals versus trucks/destructive robots).
Huh.
The response from the science blogging community and Twittersphere was immediate and is still on-going. Jonathan Eisen responded Halloween night, soon after the piece was posted. His immediate critique was of the acknowledgement of reports of “physical aggression” in the op-ed piece, without ever addressing these in their data or analysis (even the 60+ page research paper is short on coverage). The assumption: they are also anecdotal? So everything is actually fine?
Probably not (<- this article details accounts of sexual misconduct in field work involving biology, anthropology, and other social sciences, disciplines the authors above highlight as largely welcoming and open to women). Emily Willingham provides excellent analysis of the data presented in the paper and in the broader debate at hand. It turns out there are numerous discrepancies and avoided topics of analysis (e.g. salary figures often had statistically significant differences by gender; women more often reported lack of inclusion; more details in her impeccable post).
Likewise, Matthew Francis covered the story, emphasizing the need to actively address these still-existent problems and not ignore them: the importance of even a little explicit encouragement of female students in the face of implicit discouragement (like he sees in his native field of physics) is often all that’s needed. The ever-emphatic PZ Myers rounds out the debate by breaking down the major reasoning and assumptions in the original paper, with characteristic gusto.
So what exactly were the original authors thinking?
A handful of distributed scientists were able to challenge the key arguments of their paper, using their data and citations, in free time over the weekend.
Talk about peer-review.
Seriously though, what were they thinking? I would like to think that this was actually a brilliantly orchestrated publicity stunt to get more attention on this critical issue. After all, who is going to blog/tweet/counter-op-ed “Academic Science is Slightly Less Sexist than when Male Academics could still Smoke in Their Offices”? Because when you look at the data, the background on this issue, and the immediate response from the community, it’s obvious academic research isn’t now some utopian meritocracy brimming with equality. There are still institutional and systemic biases. Whether they are tied to gender, race, sexual preference, or need, or bound up in the archaic publishing system that is all too easily gamed, we have a long way to go before things can be considered “fair”. What might a fair system even look like?
Gamma Knife Problem
and 3 collaborators
Stereotactic radiosurgery delivers a single high dose of ionizing radiation to a radiographically well-defined, small intracranial 3D brain tumor without delivering any significant fraction of the prescribed dose to the surrounding brain tissue. Three modalities are commonly used in this area; they are the gamma knife unit, heavy charged particle beams, and external high-energy photon beams from linear accelerators.\cite{aoyama2006stereotactic}
The gamma knife unit delivers a single high dose of ionizing radiation emanating from 201 cobalt-60 unit sources through a heavy helmet. All 201 beams simultaneously intersect at the isocenter, resulting in a spherical (approximately) dose distribution at the effective dose levels. Irradiating the isocenter to deliver dose is termed a “shot.” Shots can be represented as different spheres. Four interchangeable outer collimator helmets with beam channel diameters of 4, 8, 14, and 18 mm are available for irradiating different size volumes. For a target volume larger than one shot, multiple shots can be used to cover the entire target. In practice, most target volumes are treated with 1 to 15 shots. The target volume is a bounded, three-dimensional digital image that usually consists of millions of points.
In geometry, circle packing is the study of the arrangement of circles (of equal or varying sizes) on a given surface such that no two circles overlap. The associated packing density of an arrangement is the proportion of the surface covered by the circles. In two-dimensional Euclidean space, the optimal lattice arrangement of identically sized circles with the highest density is the hexagonal packing arrangement, a result that was proven by Lagrange.\cite{chang2010simple} Generalizations can also be made to higher dimensions – this is called “sphere packing,” which usually deals only with identical spheres. We dealt only with circles in our first model, for simplicity.
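The hexagonal density can be checked directly from the geometry: in the triangular lattice, each circle of radius \(r\) occupies a hexagonal cell of area \(2\sqrt{3}\,r^2\), giving the optimal planar density \(\pi/(2\sqrt{3}) \approx 0.9069\). A quick sketch of this computation:

```python
import math

def hexagonal_packing_density(r=1.0):
    """Density of the hexagonal (triangular-lattice) packing of equal circles.

    Each circle of radius r sits in a hexagonal cell of area 2*sqrt(3)*r**2,
    so the density pi / (2*sqrt(3)) ~= 0.9069 is independent of r.
    """
    circle_area = math.pi * r ** 2
    hex_cell_area = 2.0 * math.sqrt(3) * r ** 2
    return circle_area / hex_cell_area
```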
Optimization of hot dog vendor location for college student convenience
and 3 collaborators
Business site selection has always been high-stakes: the opening of a new business location has extremely large monetary implications. Location can impact margins, response to competition, and effective exploitation of possible market segments \cite{Clarke_2013}\cite{Cliquet_2013}\cite{Ghosh_1983}. It is also well-established that convenience is a significant factor in consumer decisions, especially those regarding food \cite{Bonke_1996}. In a university setting, decisions such as food vendor placement become particularly important, as daily food is the second highest consumer expenditure for college students \cite{Adams_1997}. In the present study, these understandings were incorporated into a decision procedure regarding the position of a hypothetical hot dog vendor on a college campus, in which convenience for students was evaluated using spatial information.
A map was given of a college campus showing the walking paths, the dormitories, and approximate distances between the intersections (Figure 1). We were asked to answer questions about the location of a hot dog vendor:
Where on campus should you set up your stand?
How does your location change if you set up two stands?
Suppose A and C are female dorms and D, E, and F are male dorms. How would your location change if 30 percent of females and 80 percent of males are likely to eat at your stand?
Suppose the path between B and C and the path between E and D go uphill and that it is twice as hard to walk uphill as downhill. How would your choice change?
We propose an algorithm that determines the most convenient location as the position that minimizes the total walking distance between the dormitories and the hot dog vendor location.
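This minisum criterion can be sketched on a toy graph; the nodes, edges, and weights below are illustrative, not the campus map of Figure 1. Shortest walking distances come from Dijkstra's algorithm, and the gender-weighted and uphill variants of the problem amount to changing the dorm weights and edge weights respectively:

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src over an undirected weighted graph
    given as {node: [(neighbor, weight), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def best_vendor_location(graph, dorms):
    """Return the node minimizing the weighted total walking distance from
    the dorms; dorms maps dorm node -> weight (e.g. expected customers)."""
    totals = {}
    for node in graph:
        dist = dijkstra(graph, node)
        totals[node] = sum(w * dist[d] for d, w in dorms.items())
    return min(totals, key=totals.get)
```

For example, on a path A-B-C with equal weights at all three nodes, the middle intersection B minimizes the total distance.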
Supplemental Material for WASP: allele-specific methods for unbiased discovery of molecular quantitative trait loci
and 1 collaborator
To detect differences in molecular phenotypes from sequencing data it is essential to remove read mapping biases, which are a major source of false positives. The WASP read mapping pipeline accomplishes this task by ensuring that the mapping of each individual read is unbiased.
The formation of filamentary bundles in turbulent molecular clouds
and 1 collaborator
The classical picture of a star-forming filament is a near-equilibrium structure, with collapse dependent on its gravitational criticality. Recent observations have complicated this picture, revealing filaments as a mess of apparently interacting subfilaments, with transonic internal velocity dispersions and mildly supersonic intra-subfilament dispersions. How structures like this form is unresolved. Here we study the velocity structure of filamentary regions in a simulation of a turbulent molecular cloud. We present two main findings: first, the observed complex velocity features in filaments arise naturally in self-gravitating hydrodynamic simulations of turbulent clouds, without the need for magnetic or other effects. Second, a region that is filamentary only in projection and is in fact made of spatially distinct features can display these same velocity characteristics. The fact that these disjoint structures can masquerade as coherent filaments in both projection and velocity diagnostics highlights the need to continue developing sophisticated filamentary analysis techniques for star formation observations.
Human Typology Sustainability Behaviour
and 1 collaborator
Abstract
This article proposes a typology of four types of individuals who are impeded from engaging in sustainable lifestyles.
Social Recommender Systems
Recommender systems play an increasingly important role in the success of social media websites. Higher portions of social websites’ traffic are triggered by recommendations and those sites rely on the quality of the recommendations to attract new users and retain existing ones. In this chapter, we will introduce the notion of social recommender systems as recommender systems that target the social media domain. After a short introduction, we will discuss in detail two of the most prominent types of social recommender systems — recommendation of social media content and recommendation of people. We will describe the main approaches and state-of-the-art techniques for each of the recommendation types. We will also review related work from the recent years that studied such recommender systems, in order to demonstrate the different use cases and methods applied to take advantage of the unique data. We will conclude by summarizing the key aspects, emerging domains, and open challenges for social recommender systems.
Authorea newsletter. Oct 2014
Hi friend,
Now that the academic year is back in full swing, we'd like to share with you some of our latest news. First things first: We are happy to announce that we have raised a round of investment from FF Ventures and the New York Angels! We are solidifying and growing our team which means that Authorea will get better faster. We will keep working toward our mission to accelerate science, to improve dissemination and quality of research results and to promote Open Science.
In terms of technical developments, we have implemented a bunch of new features and bug fixes. We wanted to highlight our sleek brand new commenting interface - try it now! Go ahead, highlight some text and click on the comment popup. It's fast and it will let you discuss your manuscripts with coauthors, reviewers, and the public.
We also (finally!) developed a word count feature which lets you... uhm... count words! But you know what, different journals have different ways of counting words and we got you covered for all of them.
One more feature which will make some of you happy is the Microsoft Word export. Yeah that's right. Whether you are writing a math-heavy LaTeX manuscript or a student review paper, you can now export your document to Word.
As usual, feel free to post your bug reports and feature requests on our feedback page or send us an email at [email protected]
with any questions.
Happy writing,
The Authorea team
(oh, we're hiring!)
Representation Theory
Since \(v \neq 0\), we can write \(v = \sum_i c_i b_i\) in a basis \(\{b_i\}\) with some coefficient \(c_0 \neq 0\); the corresponding dual basis element \(b_0^*\) then satisfies \(b_0^*(v) = c_0 \neq 0\). Since \(b_0^*\) is in the dual vector space, we are done.
Motivation to join the SENSE programme
I am passionate about sustainability. I aim to live sufficiently and enjoy the personal qualities that come along with a lifestyle of reduced consumption. Despite the increasing relevance of, and debates around, green growth and socially fair conditions in society, most people struggle to adopt more sustainable practices in their day-to-day lives. The key question for me is thus: how can we better understand the complex dynamics that shape individual behaviour in the context of a sustainable transition path?
Throughout the first year of my studies, courses such as Energy and Material Efficiency and Environmental and Material Policy contributed to a better understanding of the technical and institutional side of sustainable development. Outside the classroom, practical involvement in a number of voluntary initiatives allowed me to explore my interest in grassroots initiatives and the promotion of sustainable lifestyles. They showed me that there is a large transformative potential within people and small communities. These communities have achieved a more balanced state within a certain domain, and allow us to learn from their living labs. They are admirable front-runners in that they are willing to fundamentally change their behaviour despite the still-prevailing systemic incompatibilities, lock-ins and discomforts.
Over the last few weeks I have been attending conferences on sustainability transitions, degrowth and energy efficiency. I came to the conclusion that in all these research fields, a more comprehensive understanding of sufficiency behaviour at the individual level in particular is crucial, and that a better conceptualisation could be a useful addition to the existing theoretical frameworks. Therefore I want to devote my Master's thesis to this research area, prospectively within the scope of a wider research project.
I am excited about the great challenges lying ahead of us that call for my and everyone else's contribution to bridging practical experiences with scientific research. And I am very enthusiastic to work together with other passionate researchers in programmes such as the one offered by SENSE, to further explore the dynamic triad of individuals, behaviour and sustainability.
Spatial Implications of Tax and Expenditure Limitations in Colorado
The economic potential of any system is not only driven by the presence of natural and/or developed capital, but also the institutions that govern the exploitation of these valuable assets. In this sense, the term economic potential is misleadingly incomplete. The mechanism that drives resource allocation is a function of both the distribution of purely economic value and the feasible range of activities governed by political institutions. The goal in institutional design for economic growth, therefore, ought to be the facilitation of those activities that increase the marginal product of value extraction efforts. If the objective is the maximization of economic growth, this broad goal is unlikely to draw many detractors. The devil, however, is in the details.
Among the multitude of policy innovations that have been advanced in service of increasing economic growth, institutional reforms that act on the property tax base have materially altered local government finance for more than a century. As early as the 1880s, these reforms were dominated by efforts to target specific populations via circuit breakers and homestead exemptions. \cite{Bowman2008} However, starting in large part with the Tax Revolt of the 1970s, more interest has taken root in implementing reforms that target the base in a general way: tax and expenditure limitations. The impact of these and other measures has been noticeable. While property tax revenue remains fairly buoyant with respect to the economy, it has declined in importance, dropping as a percentage of general revenue from 34% to 27% over the 1977-2002 period. \cite{Edwards2006}
This paper is one component of a larger study seeking to understand the unintended consequences of tax and expenditure limitations. The broad study is a three-part empirical examination of the differential impact of tax and expenditure limitations in Colorado (henceforth COTELs) on counties with different economic foundations. Each section is characterized by exploration of three thematic hypotheses:
The unifying principle across each of these inquiries is the idea that COTELs have constraint levels that vary both cross-sectionally and temporally. This paper exploits this variation to explore the extent to which fiscal clustering (measured here as fiscal capacity and revenue generation) is driven by this "COTEL intensity" concept.
416492 ORCID iDs and Counting: Uptake by the Astronomical Community
and 7 collaborators
ORCID – short for Open Researcher and Contributor ID – is an international, interdisciplinary, community-driven effort to create and maintain a registry of persistent, unique identifiers for researchers and scholars. ORCID iDs are extremely important for disambiguating non-unique author names. They can also be embedded in key workflows, such as research profile maintenance, manuscript submissions and grant applications. Using several approaches to reach out to our users, we will report on ORCID iD uptake by the astronomical community.
Measurements of Resonance Cones and Cone angle in a Steady State Magnetic Field
and 2 collaborators
Resonance cones were observed in a plasma in a steady-state magnetic field, and their angular distribution was measured over a set of radio frequencies applied to the time-varying oscillation of a short antenna. The angles were compared to the theoretical values under the cold-electron approximation and found to match within a few degrees. Resonance cones at smaller angles were also observed, as predicted by theory for warm electron temperatures where $\frac{T_{i}}{T_{e}}\ll1$ [3].
Evaluating Southern Ocean cloud biases in ACCESS1.3 using hybrid cloud regimes
Cloud biases in representing the current climate, and their effects on predicting cloud feedback in a warming climate.
Use of cloud regimes to aid model evaluation (identifying compensating biases)
Use of ISCCP-style cloud regimes for observational studies and model evaluation.
Issues relating to identifying ISCCP-style cloud regimes from models:
Identifying inconsistent cloud regimes in each model.
Applying modelled clouds to observed cloud regimes.
In this study we present a hybrid approach, using both observed and modelled cloud to identify cloud regimes that are common to the model and observations, as well as cloud regimes found only in the models.
We evaluate the ACCESS1.3 GCM; a basic comparison of this model against observations using the ISCCP-simulator is given in Figure [fig:hist_sim-obs]: we note that optically thick low and mid-topped cloud are strongly under-represented, while optically thin cloud is over-represented.
NOTE: significant under-estimate in cloud amount! May be the same problem noticed by John Haynes.
and 1 collaborator
source ee eμ
1 & 50 & 837 & 970
2 & 47 & 877 & 230
3 & 31 & 25 & 415
4 & 35 & 144 & 2356
5 & 45 & 300 & 556
Big Brother is Watching You... To Predict Crashes
and 2 collaborators
Motivation to join the 6th Sustainable Summer School
I am passionate about sustainability. I aim to live sustainably and enjoy the personal qualities that come along with a sustainable way of life. But also outside of my sustainability club, in the lives of ordinary people, I notice an increasing relevance of green growth and socially fair conditions. However, many people struggle to adopt more sustainable practices in their day-to-day lives. What are the reasons? And what can I, as a current Master's student of Sustainable Development, do to help these people successfully overcome their individual challenges?
Throughout the first year of my studies, courses such as Energy and Material Efficiency and Environmental and Material Policy contributed to a better understanding of the technical and institutional side of sustainable development. Outside the classroom, practical involvement in a number of voluntary initiatives allowed me to explore my interest in urban transitions, grassroots initiatives and the promotion of sustainable lifestyles. But they also helped me to reflect on who I am: an adventurous young man who sees himself in an experimental playground for improving the living conditions of the people around him.
I have devoted this year's semester vacations to cycling through France, Germany and the Czech Republic to visit sustainability communities. I am fascinated by their straightforward bottom-up approach and want to help share the success factors of their developments1.
These encounters showed me the large transformative potential within people and small communities. They have achieved a more balanced state within a certain domain, and allow us to learn from their living labs. They are admirable front-runners in that they are willing to fundamentally change their behaviour despite the currently still prevailing difficulties and discomforts.
I am excited about the great challenge lying ahead of mankind that calls for my and everyone else's contribution to bridging practical experiences with scientific research. And I am very enthusiastic to work together with other passionate people in projects such as those offered by the 6th Sustainability Summer School, to explore the dynamic triad of individuals, behaviour and sustainability.
¹ I am sharing my experiences in the form of storylines. An example is the community-supported agriculture (CSA) initiative in the southwest of Paris: https://medium.com/susvoice-showcasing-sustainability/community-supported-agriculture-in-the-region-of-gif-sur-yvette-219ea65d76c9
Extended Authorea LaTeX Cheat Sheet
| Phase | Time | M1 | M2 | ΔM | P |
| --- | --- | --- | --- | --- | --- |
| 1 ZAMS | 0 | 16 | 15 | – | 5.0 |
| 2 Case B | 9.89 | 15.92 | 14.94 | 0.14 | 5.1 |
| 3 ECCB | 11.30 | 3.71 | 20.86 | 6.44 | 42.7 |
| 4 ECHB | 18.10 | – | 16.76 | – | – |
| 5 ICB | 18.56 | – | 12.85 | – | – |
| 6 ECCB | 18.56 | – | 12.83 | – | – |
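As a cheat-sheet entry, a table like the one above can be typeset directly in LaTeX with the `tabular` environment. This is a minimal sketch: the column alignment, the `--` placeholders, and the math-mode column headers are formatting choices, not requirements of Authorea.

```latex
\begin{table}[h]
  \centering
  \begin{tabular}{l r r r r r}
    \hline
    Phase & Time & $M_1$ & $M_2$ & $\Delta M$ & $P$ \\
    \hline
    1 ZAMS   & 0     & 16    & 15    & --   & 5.0  \\
    2 Case B & 9.89  & 15.92 & 14.94 & 0.14 & 5.1  \\
    3 ECCB   & 11.30 & 3.71  & 20.86 & 6.44 & 42.7 \\
    4 ECHB   & 18.10 & --    & 16.76 & --   & --   \\
    5 ICB    & 18.56 & --    & 12.85 & --   & --   \\
    6 ECCB   & 18.56 & --    & 12.83 & --   & --   \\
    \hline
  \end{tabular}
\end{table}
```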
CLAS 40 Assignment #2
1. This is not a good thesis because it is an obvious statement.
2. The same is true of this one.
3. This is a good example of a thesis statement because, while it is obvious that Gaia gave advice to Zeus, it is not certain that without her advice Zeus would not have succeeded. Because of this, it sets up a specific, provable point that the author can pursue.
Through the acceptance of the apple, the “author” of the Garden of Eden myth portrays women as the ultimate source of evil. This thesis is good because it contradicts the usual view that the snake is the representation of evil, and it is something one could look to prove in the text.
Classics 40 Assignment #1
“The mother archetype was represented on Mt. Olympus by Demeter, whose most important roles were as mother (of Persephone) and as provider of food (as Goddess of Grain) and spiritual sustenance (the Eleusinian Mysteries). Although other goddesses were also mothers (Hera and Aphrodite), her daughter was Demeter’s most significant relationship” (Demeter the Archetype, 1).
“You’ve suffered pain and humiliation. / Your mind wanders into distraction, / like a bad doctor taken ill / and unable to find the cure” (Prometheus Bound, pg 2).
“Every woman who falls in love with someone who is also in love with her at that moment is a personification of the Aphrodite archetype” (Aphrodite the Archetype, 1).
“A new feature, interpolated by Plato, is the vision of the structure of the universe, in which the ‘pattern set up in the heavens ... is revealed to the souls before they choose a new life” (Plato, Republic 349).
“Ouranos, father of all, eternal cosmic element, / primeval, beginning of all and end of all, / lord of the universe, moving about the earth like a sphere / home of the blessed gods” (Orphic Hymns, 1-4).
Elliptical black hole singularity
and 1 collaborator
One more edit! Here I can write whatever I like in simple text or in LaTeX as well. I can use the toolbar above too. Let me paste some text: Astronomers produce and peruse vast amounts of scientific data. Let’s add a citation: \cite{Goodman_2009}. And a medical reference too: \cite{24938513}
Making these data publicly available is important to enable both reproducible research and long-term data curation and preservation. Because of their sheer size, however, astronomical data are often left out entirely from scientific publications and are thus hard to find and obtain. In recent years, more and more astronomers are choosing to store and make available their data on institutional repositories, personal websites and digital data libraries. In this article, we describe the use of personal data repositories as a means to enable the publication of data by individual astronomy researchers. And some LaTeX:
By associativity, if $\zeta$ is combinatorially closed then $\delta = \Psi$. Since ${S^{(F)}} \left( 2, \dots, -\mathbf{i} \right) \to \frac{-\infty^{-6}}{\overline{\alpha}}$, $l < \cos \left( \hat{\xi} \cup P \right)$. Thus every functor is Green and hyper-unconditionally stable. Obviously, every injective homeomorphism is embedded and Clifford. Because $\mathcal{A} > S$, $\tilde{i}$ is not dominated by $b$. Thus $T_t > |A|$.
Obviously, $W_\Xi$ is composite. Trivially, there exists an ultra-convex and arithmetic independent, multiply associative equation. So $\infty^{1} > \overline{0}$. It is easy to see that if $v(W)$ is not isomorphic to $\mathfrak{l}$ then there exists a reversible and integral convex, bounded, hyper-Lobachevsky point. One can easily see that $\hat{\mathscr{Q}} \le 0$. Now if $\bar{\mathbf{w}} > h'(\alpha)$ then $z_{\sigma, T} = \nu$. Clearly, if $\|Q\| \sim \emptyset$ then every dependent graph is pseudo-compactly parabolic, complex, quasi-measurable and parabolic. This completes the proof.
Convex black holes
and 1 collaborator
Astronomers produce and peruse vast amounts of scientific data. Making these data publicly available is important to enable both reproducible research and long-term data curation and preservation. Because of their sheer size, however, astronomical data are often left out entirely from scientific publications and are thus hard to find and obtain. In recent years, more and more astronomers are choosing to store and make available their data on institutional repositories, personal websites and digital data libraries. In this article, we describe the use of personal data repositories as a means to enable the publication of data by individual astronomy researchers.
Here I can type some random text and use the toolbar above.