Wednesday, 11 September 2013

Battery History

A battery, which is actually an electric cell, is a device that produces electricity from a chemical reaction. In a one cell battery, you would find a negative electrode; an electrolyte, which conducts ions; a separator, also an ion conductor; and a positive electrode.
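To make the roles of the two electrodes concrete, here is a minimal sketch (in Python, not part of the original post) that estimates a cell's open-circuit voltage from standard electrode potentials. The zinc-copper pair used below corresponds to the Daniell cell that appears later in the timeline; the textbook potential values are assumptions for illustration, and real cells deliver somewhat less under load because of internal resistance.

```python
# Rough estimate of a cell's open-circuit voltage from standard
# reduction potentials (textbook values, assumed here for illustration).
STANDARD_POTENTIALS_V = {
    "Cu2+/Cu": +0.34,   # positive electrode (cathode on discharge)
    "Zn2+/Zn": -0.76,   # negative electrode (anode on discharge)
}

def cell_voltage(cathode: str, anode: str) -> float:
    """E_cell = E_cathode - E_anode (standard conditions, no load)."""
    return STANDARD_POTENTIALS_V[cathode] - STANDARD_POTENTIALS_V[anode]

if __name__ == "__main__":
    # Zinc/copper chemistry, as in the Daniell cell: about 1.1 V.
    print(f"{cell_voltage('Cu2+/Cu', 'Zn2+/Zn'):.2f} V")
```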

Timeline of Battery History

  • 1748 - Benjamin Franklin first coined the term "battery" to describe an array of charged glass plates.
  • 1780 to 1786 - Luigi Galvani demonstrated what we now understand to be the electrical basis of nerve impulses and provided the cornerstone of research for later inventors like Volta to create batteries.
  • 1800 Voltaic Pile - Alessandro Volta invented the Voltaic Pile and discovered the first practical method of generating electricity. Constructed of alternating discs of zinc and copper with pieces of cardboard soaked in brine between the metals, the Voltaic Pile produced an electrical current. A metallic conducting arc was used to carry the electricity over a greater distance. Alessandro Volta's voltaic pile was the first "wet cell battery" that produced a reliable, steady current of electricity.
  • 1836 Daniell Cell - The Voltaic Pile could not deliver an electrical current for a long period of time. The Englishman John F. Daniell invented the Daniell Cell, which used two electrolytes: copper sulfate and zinc sulfate. The Daniell Cell lasted longer than the Volta cell or pile. This battery, which produced about 1.1 volts, was used to power objects such as telegraphs, telephones, and doorbells, and it remained popular in homes for over 100 years.
  • 1839 Fuel Cell - William Robert Grove developed the first fuel cell, which produced electricity by combining hydrogen and oxygen.
  • 1839 to 1842 - Inventors created improvements to batteries that used liquid electrolytes to produce electricity. Bunsen (1842) and Grove (1839) invented the most successful of these.
  • 1859 Rechargeable - The French inventor Gaston Plante developed the first practical lead-acid storage battery that could be recharged (a secondary battery). This type of battery is still primarily used in cars today.
  • 1866 Leclanche Carbon-Zinc Cell - The French engineer Georges Leclanche patented the carbon-zinc wet cell battery known as the Leclanche cell. According to The History of Batteries: "Georges Leclanche's original cell was assembled in a porous pot. The positive electrode consisted of crushed manganese dioxide with a little carbon mixed in. The negative pole was a zinc rod. The cathode was packed into the pot, and a carbon rod was inserted to act as a current collector. The anode or zinc rod and the pot were then immersed in an ammonium chloride solution. The liquid acted as the electrolyte, readily seeping through the porous cup and making contact with the cathode material." Georges Leclanche later improved his design by replacing the liquid electrolyte with an ammonium chloride paste and inventing a method of sealing the battery, thereby producing the first dry cell, an improved design that was now transportable.
  • 1881 - J.A. Thiebaut patented the first battery with both the negative electrode and porous pot placed in a zinc cup.
  • 1881 - Carl Gassner invented the first commercially successful dry cell battery (zinc-carbon cell).
  • 1899 - Waldemar Jungner invented the first nickel-cadmium rechargeable battery.
  • 1901 Alkaline Storage - Thomas Alva Edison invented the alkaline storage battery. Thomas Edison's alkaline cell had iron as the anode material (-) and nickelic oxide as the cathode material (+).
  • 1949 Alkaline-Manganese Battery - Lew Urry developed the small alkaline battery in 1949. The inventor was working for the Eveready Battery Co. at their research laboratory in Parma, Ohio. Alkaline batteries last five to eight times as long as zinc-carbon cells, their predecessors.
  • 1954 Solar Cells - Gerald Pearson, Calvin Fuller and Daryl Chapin invented the first solar battery, a device that converts the sun's energy to electricity. The inventors created an array of several strips of silicon (each about the size of a razorblade), placed them in sunlight, captured the free electrons and turned them into electrical current. Bell Laboratories in New York, which had funded the research, announced the prototype manufacture of the new solar battery. The first public service trial of the Bell Solar Battery began with a telephone carrier system in Americus, Georgia, on 4 October 1955.
  • 1964 - Duracell was incorporated.

Syria Crisis

The crisis in Syria was prompted by protests in mid-March 2011 calling for the release of political prisoners. National security forces responded to widespread, initially peaceful demonstrations with brutal violence. From summer 2011 onwards, Syrian President Bashar al-Assad refused to halt attacks or implement the meaningful reforms demanded by protestors. In July 2011, accounts emerged from witnesses, victims, the media, and civil society that government forces had subjected civilians to arbitrary detention, torture, and the deployment and use of heavy artillery. The Syrian people were also subjected to the Shabiha, a heavily armed state-sponsored militia fighting alongside security forces. Assad consistently denied responsibility for these crimes, placing blame for the violence on armed groups and terrorists, while denying humanitarian access to civilians. Alongside the worsening violence, the resulting lack of assistance from the UN and non-governmental organizations (NGOs) caused severe shortages of food, water and healthcare in the country.
 
As the crisis continued to escalate, opponents of the Assad regime began to loosely organize, creating several opposition organizations such as the Syrian National Council (SNC), an umbrella organization of exiled Syrians, and the Free Syrian Army (FSA), a militarized element largely composed of Syrian military defectors and armed rebels. Though ideological divisions have characterized the fragmented opposition, many groups lost patience with the lack of progress on reforms and began calling for the regime's resignation in the fall of 2011. In August 2011, the FSA began attacking Syrian soldiers with force, marking the first time that the opposition resorted to violence to overthrow the regime and end the government's widespread attacks on civilians. With the introduction of these new militant tactics came reports that opposition forces had also subjected civilians to human rights violations. The conflict has also grown increasingly sectarian in nature, with civilians being targeted by both sides based on the presumed support for or opposition to the Assad regime associated with their ethnic identities.
 
Though the government-imposed media blackout in place since March 2011 has made external confirmation of the situation in Syria very difficult, the United Nations Human Rights Council established an independent International Commission of Inquiry in September 2011 to investigate the alleged human rights violations. The Commission has since produced five reports and concluded that the Syrian government and the Shabiha have committed crimes against humanity and war crimes, and that anti-government groups have also been responsible for war crimes and crimes against humanity.
 
Initially, regional actors, such as the League of Arab States and the Gulf Cooperation Council, were hesitant to respond, and the United Nations Security Council (UNSC) was unable to reach a consensus on decisive action to end the conflict. After nearly a year of fighting, former Secretary-General Kofi Annan was appointed as Joint Special Envoy for the UN and the League of Arab States on 23 February 2012. Annan quickly set forth a six-point peace plan, which included commitments from all parties to work with the Special Envoy, a ceasefire, and access to and the timely provision of humanitarian assistance. Unfortunately, the deal failed to end the violence, which by then had become a de facto civil war. The UNSC then authorized the deployment of a UN Supervision Mission in Syria (UNSMIS) of 300 unarmed observers to facilitate the peace plan in April 2012, but activities were suspended in June when the observers’ presence failed to quell the violence. By late July, fatalities had mounted to 19,000 and tens of thousands of civilians remained displaced, seeking refuge in neighboring countries such as Jordan, Lebanon, Iraq and Turkey. The following month, Annan resigned from his position, citing the lack of political unity within the UN as a major obstacle to finding a solution to the crisis, and was replaced by Lakhdar Brahimi. Brahimi immediately faced a tumultuous situation, with clashes between government and opposition forces fighting for control of Damascus, mass executions by government forces and a growing humanitarian crisis. As of yet, he has been unable to negotiate an end to the violence, and on 29 January 2013 he warned a still seriously divided Security Council that the country was “breaking up before everyone's eyes.”
 
In November 2012, opposition forces came together to create a new coalition intended to be more inclusive and representative of the Syrian opposition. The new body was named the National Coalition for Syrian Revolutionary and Opposition Forces, and it steadily gathered international recognition as the legitimate representative of the Syrian people. The Coalition was first recognized by the Arab League on 12 November 2012, then by France on 13 November, the EU on 19 November, the UK on 20 November, the US on 12 December, and then others. Nonetheless, a political solution has not yet been put forward, and the opposition forces grew increasingly fragmented, divided by ideological barriers between secular and Islamist armed groups as well as pervasive managerial gaps. In April 2013, Al Qaeda announced its allegiance to the rebel forces, and foreign fighters enlisted in the sectarian war in increasing numbers. In July 2013, the UN reported that at least 100,000 people had been killed, that a staggering 2 million had fled to neighboring countries, and that 4 million remained internally displaced.
 
As early as 27 April 2011, the then UN Under-Secretary-General for Political Affairs, B. Lynn Pascoe, informed the UN Security Council that sources in Syria had reported “the use of artillery fire against unarmed civilians; door-to-door arrest campaigns; the shooting of medical personnel who attempt to aid the wounded; raids against hospitals, clinics and mosques and the purposeful destruction of medical supplies and arrest of medical personnel”. The Syrian government also allegedly denied access to international monitors, humanitarian groups and human rights organizations while simultaneously shutting off local social media communications. Based on interviews, the Commission of Inquiry on Syria presented its initial findings to the Council in Geneva on 28 November 2011, reporting evidence that crimes against humanity had been committed by military and security forces, including sexual violence, torture, arbitrary detention and murder.
 
Government attacks also shifted from sporadic violence to targeted large-scale killings. The spiraling violence included the use of cluster bombs, which are prohibited in many states around the world. Among other instances of violence with a high civilian death toll, 108 people died in an attack on Houla in May 2012 and 71 men were massacred in Aleppo in January 2013. In a 3 July report from Human Rights Watch, witnesses reported the use of torture in 27 detention facilities run by Syrian intelligence agencies, including at the hands of the commanders in charge. Human Rights Watch also released evidence of a clear chain of command responsibility for the atrocities committed, reaching all the way up to high-level Syrian officials. Subsequent reports by the Commission of Inquiry also established that gross violations of human rights had been committed by Syrian military and security forces, and stressed the harmful effects on the population, calling for an end to impunity. Furthermore, there have been repeated claims of the deployment of chemical agents by the Syrian government, which is known to have stockpiles of such weapons. Individual states, such as the United States, France and the United Kingdom, have announced that they have evidence of the government’s responsibility for the use of chemical weapons.
 
The FSA also reportedly failed to comply with international human rights and humanitarian law, according to evidence from civil society groups including Human Rights Watch, which reported in March 2012 that FSA forces had committed human rights abuses against civilians, including extrajudicial killing, capture and torture. On 25 July 2012, Amnesty International released a press statement saying that opposition groups had been deliberately and unlawfully killing captured opponents in Syria, and called on all opposition parties to abide by international humanitarian law. In response to widespread concern, FSA officials signed a “code of conduct” in August 2012 pledging to refrain from torture, attacks on civilians and other human rights abuses. This coincided with the Commission of Inquiry’s report of 15 August, which concluded that opposition forces had committed war crimes and crimes against humanity. Meanwhile, Syrian lawyers attempted to collect war crimes testimonies from civilians in the hope of prosecuting these crimes under international law once the conflict ends. In March 2013, the government accused rebel forces of using chemical weapons in Aleppo, though the opposition maintained that government forces had actually deployed them. The UN Secretary-General responded by setting up a team of scientists to investigate the allegations at three sites where chemical weapons had allegedly been used.
Russia then announced in early July 2013 that it had evidence that anti-government armed groups had used sarin gas against government forces, though neither the UN nor the Commission of Inquiry has confirmed this claim.
 
On 17 May 2013, the UN Refugee Agency announced that the number of Syrian refugees had surpassed 1.65 million. Meanwhile, the UN announced that the number of internally displaced people in Syria had reached 4.5 million. The refugee crisis has taken its toll on bordering countries. According to UNHCR figures from May 2013, almost 418,000 Syrians have sought refuge in Turkey, over 512,000 in Jordan, 561,000 in Lebanon, 103,000 in Egypt and 161,000 in Iraq. The primary reasons for fleeing appear to be atrocities committed against civilians, reported the International Rescue Committee in January 2013, with journeys to safety made more treacherous by winter conditions. Foreign Affairs reported on 24 May 2013 that refugee populations faced dire conditions in the refugee camps. Furthermore, the hosting countries are experiencing severe pressure. In Jordan, the World Bank reported in May that the influx of refugees is affecting the livelihoods, public services, and basic commodities of the local communities, and it announced financial support to the government. This economic impact has created resentment against the refugees within the country. Although Turkey has been more economically equipped to handle the refugee influx, social and political costs have arisen, resulting in an increased risk of sectarian spillover, particularly in areas sympathetic to Assad. Lebanon is also significantly burdened by the conflict; in April 2013, Reuters reported that one out of ten residents there was a refugee. Lebanon’s hospitals, electricity and transportation systems are strained, and food prices are rising. Furthermore, the country is experiencing a re-ignition of ethnic and religious tensions, with violent clashes occurring between Sunni and Shiite communities and between supporters and opponents of Assad. Civil society actors, including Refugees International, warned on 10 July that Syria’s neighbors had been “stretched too thin”, and called for urgent assistance and increased funding from the international community. On 24 May, the UN, acknowledging the strain on Syria’s neighbors, nonetheless urged those countries to continue keeping their borders open for refugees.
 
a) Regional Responses
 
The League of Arab States
The League of Arab States (LAS) initially stressed that it would not take unilateral action in response to the crisis. However, after nearly nine months of violence against civilians, the League introduced a peace plan, which called on the government to halt violence, release prisoners, allow for media access and remove its military presence from civilian areas. When the government failed to uphold the plan in spite of its initial agreement to do so, the League suspended Syria’s membership on 12 November 2011 and imposed economic sanctions on 27 November 2011. On 19 December, Syria signed a peace deal mandating an Arab observer mission to observe and report on the crisis, but the League suspended the mission on 29 January 2012 due to “critical” conditions in the country. The League then encouraged the Security Council to take further action and appointed a Joint Special Envoy with the UN to facilitate a political solution to the crisis. In November 2012, the League, alongside the Gulf Cooperation Council, recognized the National Coalition of the Syrian Opposition, an opposition organization formed that same month from various opposition groups in order to provide a more inclusive and representative model, as “the legitimate representative and main interlocutor with the Arab League and GCC”. The Coalition officially took Syria’s seat at the summit of the Arab League in March 2013.
 
The European Union
The European Union (EU) imposed economic sanctions, including an arms embargo, visa ban and asset freeze, against the Syrian regime in May 2011, and has heightened the sanctions periodically since then. In November 2012, the EU recognized the National Coalition of the Syrian Opposition as the legitimate representative of the Syrian people, and in January 2013 it released a statement calling for Assad to step down to allow for a political transition. In March 2013, the EU foreign ministers modified these sanctions, making it possible for European governments to provide "non-lethal" supplies to the opposition. On 28 May 2013, the European states effectively ended the arms embargo on the opposition in Syria, opening up the possibility of arming anti-government rebels while upholding the arms embargo on the Assad government. Only the United Kingdom and France have expressed the possibility of sending arms, while the majority of the remaining EU member states worry that further militarization will only fuel more violence.

b) United Nations Responses
 
Special Advisers on the Prevention of Genocide and the Responsibility to Protect
The Special Advisers of the Secretary-General on the Prevention of Genocide and on the Responsibility to Protect voiced their concern over the Syrian government’s systematic and widespread attacks targeting civilians and, in a series of public statements, reminded the government of its responsibility to protect its population. Notably, in their fifth statement, released on 14 June 2012, they called on the international community “to take immediate, decisive action to meet its responsibility to protect populations at risk of further atrocity crimes in Syria, taking into consideration the full range of tools available under the United Nations Charter”, including a referral of the situation by the Security Council to the International Criminal Court (ICC). On 8 July 2013, the UN Special Adviser on the Prevention of Genocide, Mr. Dieng, warned against the increasing use of rhetoric by political and religious leaders in the Middle East and North Africa region, as it could be used to incite further violence in Syria.
 
Human Rights Council and Office of the High Commissioner for Human Rights
As discussed above, the Human Rights Council and the Office of the High Commissioner for Human Rights were seized of the crisis early on, and in August 2011 they mandated an independent Commission of Inquiry to investigate human rights violations in Syria. The High Commissioner for Human Rights, Navi Pillay, also repeatedly called upon the Syrian government to uphold its responsibility to protect and to prevent and prosecute perpetrators of international crimes, and she has repeatedly urged the Security Council to refer the case to the ICC, beginning in December 2011.
 
Security Council
The Security Council received criticism for failing to address the crisis for over five months after protests had begun, and for its subsequent inability to reach a consensus on how to move forward. The Council began with a cautious approach, not wanting to violate the UN Charter or aid a civil war; however, as the situation progressed, several attempts at resolutions to bring an end to the conflict were vetoed by Russia and China. With the appointment of a special envoy and the establishment of the United Nations Supervision Mission in Syria (UNSMIS), the Council attempted to take preventive action. Unfortunately, the situation had already escalated to a point of extreme violence with very limited room for political negotiations between the disputing parties. As such, UNSMIS immediately faced many technical difficulties on and off the ground, including limits on its freedom of movement imposed by the government, blocked access to sites of mass violence, and the rejection of observers’ visas. These, alongside the ongoing violence, led to the Mission’s suspension on 15 June 2012.
 
In October 2012, the Security Council issued press statements condemning the terrorist attacks in Aleppo and later calling on all relevant parties to implement a ceasefire in honor of Eid al-Adha. Since then, the Council has received significant pressure to refer the case in Syria to the ICC, including from the High Commissioner for Human Rights and the Special Advisers on the Prevention of Genocide and RtoP, as well as from over 50 Member States who signed a letter saying as much in mid-January 2013.
 
General Assembly
The General Assembly adopted several resolutions calling for all parties to support efforts to peacefully resolve the crisis. In an attempt to pressure the Security Council to act, the General Assembly requested that UN Secretary-General Ban Ki-moon brief the Council in January 2013. At this time, the Secretary-General stated: “We must do everything we can to reach Syrians in need. We must intensify our efforts to end the violence through diplomacy, overcoming the divisions within Syria, the region and the Security Council.” On 15 May 2013, the General Assembly adopted a resolution condemning the conflict’s escalation, violations of humanitarian law, and violence, and demanding that the government meet its responsibility to protect its population, comply with international law, and cooperate with the Commission of Inquiry to investigate claims of chemical weapons use. The resolution also asked the Secretary-General to report on the resolution’s implementation to the GA within 30 days.
 
c) National Actor Responses
 
Russia and China attracted significant criticism from Arab and Western leaders for their economic, political and military ties to Syria, and because they vetoed three UN Security Council resolutions which had included language citing the responsibility of the Assad government. Separately, Russia made attempts at unilateral diplomacy with a view to pressuring the Assad regime to limit its military actions against civilians and allow for some sort of political transition, and in December 2012 it publicly acknowledged that the Assad regime may well be losing control of the country. In early May 2013, Russia announced its plans to hold a Syria peace conference together with the United States to broker a peace agreement; however, the conference has been repeatedly delayed due to discussions on the structure and attendees of the meeting, with the most recent information indicating the peace talks are unlikely to take place before September.
 
Turkey’s border with Syria has seen skirmishes and shelling since July 2012, and in October, five Turkish civilians were killed by Syrian mortar fire, to which Turkey responded with proportionate return fire. In February 2013, an explosion on the Syrian-Turkish border killed at least 13 people, putting further strain on the deteriorating relationship between the two countries. After car bombs on the border killed 43 people in May 2013, Turkey warned it would take all steps necessary to protect itself. Though NATO had originally stated it would not intervene in the Syrian crisis, the organization placed Patriot missiles on Turkey’s border with Syria in January 2013 to defend against external attack, at the request of the Turkish government. Anders Fogh Rasmussen, NATO’s secretary general, had previously warned the Syrian government on 3 December 2012 that the international community would not stand by if the Assad regime unleashed chemical warfare against the Syrian people.
 
Fighting between Syrian government forces and rebels in the Golan Heights has meanwhile challenged the decades-long ceasefire between Syria and Israel and complicated the operations of the United Nations Disengagement Observer Force (UNDOF), which is charged with monitoring the accord. After two abductions of UNDOF peacekeepers in March and May 2013, the UN has had to contend with troop-contributing countries (TCCs) withdrawing their peacekeepers from UNDOF out of concern for their safety.
 
Though Lebanon has long had an official policy of disassociation in the Syrian conflict, the influx of refugees and increased cross-border fire from Syria has threatened to embroil the country in its neighbor’s crisis. More recently, the announced entry of Lebanese political and military group Hezbollah and their key role in helping the Syrian government re-take the town of Qusayr in June 2013 represents the starkest indication yet that the Syrian crisis is slowly devolving into a full-scale regional crisis.
 
As the conflict wears on, without distinctive action from international organizations, several national actors have also increased their support to the Syrian opposition politically, economically and militarily. The Free Syrian Army received a steady stream of non-military assistance and then non-lethal military equipment and funding from several governments, including the United States, United Kingdom, Turkey, Saudi Arabia and Qatar, beginning in June 2012.

Latest Developments:

In August 2013, a series of videos, photographs and reports from the ground in Syria indicated that a new chemical weapons attack had killed a large number of civilians in rebel-held areas outside Damascus, in what, if verified, would be “the world's most lethal chemical weapons attack since the 1980s.” The footage shows many children among the victims of the attacks. Casualty estimates have varied widely, from 500 to over 1,300.
 
The international community called for an immediate investigation of the use of chemical weapons after the attack on civilians. The UN Secretary-General has stated that “it is his intention to conduct a thorough, impartial and prompt investigation on the reports of the alleged use of chemical weapons during these attacks.” The team of United Nations inspectors probing the possible use of chemical weapons returned from Syria on 31 August after two weeks of investigation. The team's results are expected in the coming days and could figure significantly in the international community's response to the crisis.
 
The UN Security Council convened for an emergency session on 21 August to discuss the attacks. However, it remains to be seen whether the Council’s divisions on Syria will be overcome by the latest reports of mass atrocities. Many voices inside and outside of Syria have stated that the international community and the Syrian government have both failed in their responsibility to protect Syrians, a failure that, particularly in light of the alleged attacks, has reached what some are calling a tipping point and requires an immediate and meaningful response.

Some states, already convinced that the Assad regime is behind the attacks, have declared publicly that a “red line” has been crossed and are therefore seriously considering a military operation in response to the chemical weapons attack, which, if verified, would constitute another war crime and a violation of international humanitarian law. However, others have questioned whether a military response solely to the 21 August chemical weapons attack would actually serve to protect civilians, or whether it would be mostly designed to punish the Assad regime.
 

Thursday, 30 May 2013

Virus

Viruses have existed for as long as there has been life on Earth.
Early references to viruses

Early references to viral infections include Homer’s mention of “rabid dogs”. Rabies, which is caused by a virus that affects dogs, was also known in Mesopotamia.

Polio is also caused by a virus and leads to paralysis of the lower limbs. Evidence of polio can also be seen in drawings from ancient Egypt.

In addition, smallpox, which is caused by a virus that has since been eradicated worldwide, played a significant role in the history of South and Central America.
Virology – the study of viruses

The study of viruses is called virology. Virology began with the experiments of Jenner in 1798. Jenner did not know the cause, but he found that individuals exposed to cowpox did not suffer from smallpox.

He began the first known form of vaccination, using cowpox infection to prevent smallpox in individuals. He had not yet identified the causative organism or the cause of the immunity for either cowpox or smallpox.
Koch and Henle

Koch and Henle formulated their postulates on the microbial basis of disease. These state that:
the organism must regularly be found in the lesions of the disease
it must be isolated from the diseased host and grown in pure culture
inoculation of such a pure culture into a host should initiate the disease, and the organism should be recoverable from the secondarily infected host as well

Viruses do not conform to all of these postulates.
Louis Pasteur

In 1881-1885, Louis Pasteur first used animals as models for growing and studying viruses. He found that the rabies virus could be cultured in rabbit brains and went on to develop the rabies vaccine. However, Pasteur did not try to identify the infectious agent.
The discovery of viruses

1886-1903 – This was the period in which viruses were actually discovered. Ivanowski observed the phenomenon and looked for a bacterium-like agent, and in 1898 Beijerinck demonstrated the filterable characteristic of the virus and found that the virus is an obligate parasite, meaning that it is unable to live on its own.
Charles Chamberland and filterable agents

In 1884, the French microbiologist Charles Chamberland invented a filter with pores smaller than bacteria. Chamberland filter-candles, made of unglazed porcelain or of diatomaceous earth (kieselguhr), had been invented for water purification. These filters retained bacteria and had a pore size of 0.1-0.5 micron. Agents that passed through them were called “filterable” organisms. Loeffler and Frosch (1898) reported that the infectious agent of foot-and-mouth disease was a filterable agent.

In 1900, yellow fever became the first human disease shown to be caused by a filterable agent, in work by Walter Reed. He found the yellow fever virus present in the blood of patients during the fever phase, and he also found that the virus was spread via mosquitoes. In 1853 there had been an epidemic in New Orleans in which the mortality rate from the infection was as high as 28%. Infectivity was controlled by destroying mosquito populations.
Trapping viruses

In the 1930s, Elford developed collodion membranes that could trap viruses and showed that viruses are far smaller than bacteria. In 1908, Ellerman and Bang demonstrated that certain types of tumors (leukemia of chickens) were caused by viruses. In 1911, Peyton Rous discovered that non-cellular agents like viruses could transmit solid tumors; the agent was termed Rous sarcoma virus (RSV).
Bacteriophages

A most important discovery was that of the bacteriophage. In 1915, Twort was working with vaccinia virus and found viruses that grew in cultures of bacteria. He called them bacteriophages. Twort abandoned this work after World War I. In 1917, D'Herelle, a Canadian, also found similar bacteriophages.
Images of viruses

In 1931, the German engineers Ernst Ruska and Max Knoll developed the electron microscope, which enabled the first images of viruses. In 1935, the American biochemist and virologist Wendell Stanley examined the tobacco mosaic virus and found it to be made mostly of protein. A short time later, this virus was separated into its protein and RNA parts. Tobacco mosaic virus was the first virus to be crystallised, and its structure could therefore be elucidated in detail.
Molecular biology

Between 1938 and 1970, virology developed by leaps and bounds into molecular biology. The 1940s and 1950s were the era of the bacteriophage and the animal virus.

Delbrück is considered the father of modern molecular biology; he developed the concepts of virology within the science. In 1952, Hershey and Chase showed that it was the nucleic acid portion that was responsible for infectivity and carried the genetic material.

In 1953, Watson and Crick determined the exact structure of DNA. Lwoff in 1949 found that a virus could behave like a bacterial gene on the chromosome, and he also contributed to the operon model for gene induction and repression. Lwoff in 1957 defined viruses as potentially pathogenic entities with an infectious phase, having only one type of nucleic acid, multiplying with their genetic material and unable to undergo binary fission.

In 1931, American pathologist Ernest William Goodpasture grew influenza and several other viruses in fertilised chickens' eggs. In 1949, John F. Enders, Thomas Weller, and Frederick Robbins grew polio virus in cultured human embryo cells, the first virus to be grown without using solid animal tissue or eggs. This enabled Jonas Salk to make an effective polio vaccine.

The era of polio research came next and was very important: in 1953 the Salk vaccine was introduced, and by 1955 poliovirus had been crystallized. Later, Sabin introduced the attenuated polio vaccine.

In the 1980s, cloning of viral genes was developed, sequencing of viral genomes was achieved, and the production of hybridomas became a reality. The AIDS virus, HIV, was identified in the 1980s. Further uses of viruses in gene therapy developed over the next two decades.

Sunday, 19 May 2013

Solar Energy


The Basics
Solar energy technologies convert the sun’s light into usable electricity or heat. Solar energy systems can be divided into two major categories: photovoltaic and thermal. Photovoltaic cells produce electricity directly, while solar thermal systems produce heat for buildings, industrial processes or domestic hot water. Thermal systems can also generate electricity by operating heat engines or by producing steam to spin electric turbines. Solar energy systems have no fuel costs, so most of their cost comes from the original investment in the equipment. The total installed costs of solar applications vary depending on the type of financing used. Solar photovoltaics generally range from $6-$10 per watt installed, or $12,000-$30,000 for a typical 2-3 kilowatt residential-scale system. A solar hot water system sized for a typical home is much cheaper and costs between $3,500 and $8,000 depending on the size and type of the system (above prices exclude any incentives or rebates). 
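As a quick sanity check of the figures above, here is a small Python sketch (not from the original article) that turns a per-watt price into a total installed cost; the system sizes and per-watt prices are simply the ranges quoted in the paragraph, and rebates or incentives are ignored.

```python
def installed_cost(system_kw: float, dollars_per_watt: float) -> float:
    """Total up-front cost of a PV system, before any incentives."""
    return system_kw * 1000 * dollars_per_watt

if __name__ == "__main__":
    # Low end: 2 kW at $6/W; high end: 3 kW at $10/W.
    print(installed_cost(2, 6.0))   # 12000.0
    print(installed_cost(3, 10.0))  # 30000.0
```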
 
Resource Potential
The Northwest receives more than enough sunlight to meet our entire energy needs for the foreseeable future. As the map above illustrates, the Northwest’s highest potential is in southeastern Oregon and southern Idaho; however, there are no “bad” solar sites—even the rainiest parts of the Northwest receive almost half as much solar energy as the deserts of California and Arizona, and they receive more than Germany, which has made itself a solar energy leader.
 
Photovoltaic Cells
Photovoltaics (PVs) convert sunlight directly into electricity, using semiconductors made from silicon or other materials. Photovoltaic modules mounted on homes in the Northwest can produce electricity at a levelized cost of 20-60 cents per kilowatt-hour (kWh) before incentives. Incentives can bring the levelized cost down considerably to 10-20 cents per kWh.
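The "levelized cost" quoted above spreads the up-front investment over all the energy a system produces. The sketch below (Python, with illustrative assumptions: a 3 kW system at $8 per installed watt, roughly 3,300 kWh of output per year in a Northwest climate, a 25-year life and a 5% discount rate) shows one common way to compute it with a capital recovery factor; it is not the article's own calculation.

```python
def levelized_cost_per_kwh(capex: float, annual_kwh: float,
                           lifetime_years: int, discount_rate: float) -> float:
    """Levelized cost of energy: annualized capital cost divided by
    annual energy output (fuel and O&M costs assumed to be zero)."""
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)  # capital recovery factor
    return capex * crf / annual_kwh

if __name__ == "__main__":
    # Hypothetical 3 kW rooftop system: $24,000 installed, ~3,300 kWh/year.
    lcoe = levelized_cost_per_kwh(24_000, 3_300, 25, 0.05)
    print(f"{lcoe * 100:.0f} cents/kWh")  # roughly 52 cents/kWh, before incentives
```

With incentives cutting the up-front cost, the same arithmetic lands roughly in the 10-20 cent range mentioned above.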
 
PVs generate power on a much smaller scale than traditional utility power plants, so they can often provide high-value electricity exactly where and when it is needed. PVs are often the best choice for supplying power for remote, “off-grid” sites or in situations where the transmission or distribution system would otherwise need to be upgraded in order to meet peak demands. Distribution line extensions of more than half a mile are generally more expensive than investing in a PV system for a typical home.
 
Other cost-effective PV applications include building-integrated power generation, meeting high summer demand for electricity (e.g., air conditioning), pumping water, lighting signs and powering equipment used for communications, safety or signaling.
 
Prices for photovoltaics are falling as markets expand. Solar PV demand has grown consistently by 20-25% per year over the past 20 years while solar cell prices fell from $27 per watt of capacity in 1982 to less than $4 per watt today.
 
Direct Thermal
Direct-use thermal systems are usually located on individual buildings, where they use solar energy directly as a source of heat. The most common systems use sunlight to heat water for houses or swimming pools, or use collector systems or passive solar architecture to heat living and working spaces. These systems can replace electric heating for as little as three cents per kilowatt-hour, and utility and state incentives reduce the costs even further in some cases.
 
Environmental Impacts
Solar power is an extremely clean way to generate electricity. There are no air emissions associated with the operation of solar modules or direct application technologies. Residential-scale passive construction, photovoltaic, solar water heating, and other direct applications reduce power generation from traditional sources and the associated environmental impacts.
 
Net Metering
Utilities in all four Northwestern states offer net metering programs, which make it easy for customers to install solar electric systems at their homes or businesses. In a net metering program, customers feed extra power generated by their solar equipment during the day into the utility’s electrical grid for distribution to other customers. Then, at night or other times when the customer needs more power than their system generates, the building draws power back from the utility grid.
 
Net metering allows customers to install solar equipment without the need for expensive storage systems, and without wasting extra power generated when sunlight is at its peak. Such programs also provide a simple, standardized way for customers to use solar systems while retaining access to utility-supplied power.
 
In most net metering programs, the utility installs a special ‘dual-reading’ meter at the customer’s building, which keeps track of both the energy consumed by the building and the energy generated by the solar array. The customer is billed only for the net amount of electricity that they draw from the utility, effectively receiving the utility’s full retail price for the electricity they generated themselves.
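To illustrate the billing logic described above, here is a minimal Python sketch; it is an illustration, not any utility's actual tariff. It nets generation against consumption over a billing period and bills only the remainder at the retail rate, and it deliberately ignores how surplus generation is credited, since that varies by program.

```python
def net_metering_bill(consumed_kwh: list[float],
                      generated_kwh: list[float],
                      retail_rate: float) -> float:
    """Bill for the net energy drawn from the grid over one billing period.

    Each list holds one reading per interval (e.g. hourly). If the array
    generates more than the building uses, the energy bill is zero; credit
    for the surplus is handled differently by each program and is not
    modeled here.
    """
    net_kwh = sum(consumed_kwh) - sum(generated_kwh)
    return max(net_kwh, 0.0) * retail_rate

if __name__ == "__main__":
    # Hypothetical month: 600 kWh consumed, 450 kWh generated, 10 cents/kWh.
    print(net_metering_bill([600.0], [450.0], 0.10))  # 15.0 dollars
```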
 
[Chart: Annual U.S. Solar Installations by Technology. Source: Interstate Renewable Energy Council]
 
Net metering is available from utilities throughout Oregon and Washington, and law requires most Montana utilities to offer it as well. Additionally, Idaho Power and Rocky Mountain Power offer net metering in Idaho in accord with a Public Utilities Commission rule.
 
Incentive Programs in the Northwest
Every state in the Northwest offers incentives for solar energy development. Oregon, Idaho and Montana all offer low-interest loans and substantial tax credits for solar systems bought by businesses, individuals or governments. Washington now offers a production incentive of $0.15/kilowatt-hour or more for electricity from solar energy, depending on where the technology was manufactured. Montana and Oregon also exempt solar systems from property tax assessment, while Idaho and Washington exempt solar system purchases from sales taxes. Many local utilities and regional organizations also provide incentives. For example, the Energy Trust of Oregon offers additional rebates and loans to customers of Oregon’s two largest utilities and many utilities offer additional rebates, loans, or production incentives for solar energy systems.

Saturday, 18 May 2013

Flying Cars


Just a decade and a half after the Wright Brothers took off in their airplane over the plains of Kitty Hawk, N.C., in 1903, other pioneering men began chasing the dream of a flying car. There was even one attempt in the 18th century to develop a gliding horse cart, which, to no great surprise, failed. There are nearly 80 patents on file at the United States Patent and Trademark Office for various kinds of flying cars. Some of these have actually flown. Most have not. And all have come up short of reaching the goal of the mass-produced flying car. Here's a look back at a few of the flying cars that distinguished themselves from the pack:
Curtiss Autoplane - In 1917, Glenn Curtiss, who could be called the father of the flying car, unveiled the first attempt at such a vehicle. His aluminum Autoplane sported three wings that spanned 40 feet (12.2 meters). The car's motor drove a four-bladed propeller at the rear of the car. The Autoplane never truly flew, but it did manage a few short hops.
Arrowbile - Developed by Waldo Waterman in 1937, the Arrowbile was a hybrid Studebaker-aircraft. Like the Autoplane, it too had a propeller attached to the rear of the vehicle. The three-wheeled car was powered by a typical 100-horsepower Studebaker engine. The wings detached for storage. A lack of funding killed the project.
Airphibian - Robert Fulton, who was a distant relative of the steam engine inventor, developed the Airphibian in 1946. Instead of adapting a car for flying, Fulton adapted a plane for the road. The wings and tail section of the plane could be removed to accommodate road travel, and the propeller could be stored inside the plane's fuselage. It took only five minutes to convert the plane into a car. The Airphibian was the first flying car to be certified by the Civil Aeronautics Administration, the predecessor of the Federal Aviation Administration (FAA). It had a 150-horsepower, six-cylinder engine and could fly at 120 miles per hour and drive at 50 mph. Despite his success, Fulton couldn't find a reliable financial backer for the Airphibian.
ConvAirCar - In the 1940s, Consolidated-Vultee developed a two-door sedan equipped with a detachable airplane unit. The ConvAirCar debuted in 1947, and offered one hour of flight and a gas mileage of 45 miles (72 kilometers) per gallon. Plans to market the car ended when it crashed on its third flight.
Avrocar - The first flying car designed for military use was the Avrocar, developed in a joint effort between Canadian and British military. The flying-saucer-like vehicle was supposed to be a lightweight air carrier that would move troops to the battlefield.
Aerocar - Inspired by the Airphibian and Robert Fulton, whom he had met years before, Moulton "Molt" Taylor created perhaps the best-known and most successful flying car to date. The Aerocar was designed to drive, fly and then drive again without interruption. Taylor covered his car with a fiberglass shell. A 10-foot-long (3-meter) drive shaft connected the engine to a pusher propeller. It cruised at 120 mph (193 kph) in the air and was the second and last roadable aircraft to receive FAA approval. In 1970, Ford Motor Co. even considered marketing the vehicle, but the decade's oil crisis dashed those plans.
These pioneers never managed to develop a viable flying car, and some even died testing their inventions. However, they proved that a car could be built to fly, and inspired a new group of roadable aircraft enthusiasts. With advances in lightweight material, computer modeling and computer-controlled aircraft, the dream is very close to becoming reality. In the next section, we will look at the flying cars being developed today that eventually could be in our garages.

Aeronautical Engineering


The roots of aeronautical engineering can be traced to the early days of mechanical engineering, to inventors’ concepts, and to the initial studies of aerodynamics, a branch of theoretical physics. The earliest sketches of flight vehicles were drawn by Leonardo da Vinci, who suggested two ideas for sustentation. The first was an ornithopter, a flying machine using flapping wings to imitate the flight of birds. The second idea was an aerial screw, the predecessor of the helicopter.

Manned flight was first achieved in 1783, in a hot-air balloon designed by the French brothers Joseph-Michel and Jacques-Étienne Montgolfier. Aerodynamics became a factor in balloon flight when a propulsion system was considered for forward movement. Benjamin Franklin was one of the first to propose such an idea, which led to the development of the dirigible. The power-driven balloon was invented by Henri Giffard, a Frenchman, in 1852. The invention of lighter-than-air vehicles occurred independently of the development of aircraft.

The breakthrough in aircraft development came in 1799 when Sir George Cayley, an English baron, drew an airplane incorporating a fixed wing for lift, an empennage (consisting of horizontal and vertical tail surfaces for stability and control), and a separate propulsion system. Because engine development was virtually nonexistent, Cayley turned to gliders, building the first successful one in 1849. Gliding flights established a data base for aerodynamics and aircraft design. Otto Lilienthal, a German scientist, recorded more than 2,000 glides in a five-year period, beginning in 1891. Lilienthal’s work was followed by the American aeronaut Octave Chanute, a friend of the American brothers Orville and Wilbur Wright, the fathers of modern manned flight.

Following the first sustained flight of a heavier-than-air vehicle in 1903, the Wright brothers refined their design, eventually selling airplanes to the U.S. Army. The first major impetus to aircraft development occurred during World War I, when aircraft were designed and constructed for specific military missions, including fighter attack, bombing, and reconnaissance. The end of the war marked the decline of military high-technology aircraft and the rise of civil air transportation. Many advances in the civil sector were due to technologies gained in developing military and racing aircraft. A successful military design that found many civil applications was the U.S. Navy Curtiss NC-4 flying boat, powered by four 400-horsepower V-12 Liberty engines. It was the British, however, who paved the way in civil aviation in 1920 with a 12-passenger Handley-Page transport. Aviation boomed after Charles A. Lindbergh’s solo flight across the Atlantic Ocean in 1927. Advances in metallurgy led to improved strength-to-weight ratios and, coupled with a monocoque design, enabled aircraft to fly farther and faster. Hugo Junkers, a German, built the first all-metal monoplane in 1910, but the design was not accepted until 1933, when the Boeing 247-D entered service. The twin-engine design of the latter established the foundation of modern air transport.

The advent of the turbine-powered airplane dramatically changed the air transportation industry. Germany and Britain were concurrently developing the jet engine, but it was a German Heinkel He 178 that made the first jet flight on Aug. 27, 1939. Even though World War II accelerated the growth of the airplane, the jet aircraft was not introduced into service until 1944, when the British Gloster Meteor became operational, shortly followed by the German Me 262. The first practical American jet was the Lockheed F-80, which entered service in 1945.

Commercial aircraft after World War II continued to use the more economical propeller method of propulsion. The efficiency of the jet engine was increased, and in 1949 the British de Havilland Comet inaugurated commercial jet transport flight. The Comet, however, experienced structural failures that curtailed the service, and it was not until 1958 that the highly successful Boeing 707 jet transport began nonstop transatlantic flights. While civil aircraft designs utilize most new technological advancements, the transport and general aviation configurations have changed only slightly since 1960. Because of escalating fuel and hardware prices, the development of civil aircraft has been dominated by the need for economical operation.

Technological improvements in propulsion, materials, avionics, and stability and controls have enabled aircraft to grow in size, carrying more cargo faster and over longer distances. While aircraft are becoming safer and more efficient, they are also now very complex. Today’s commercial aircraft are among the most sophisticated engineering achievements of the day.

Smaller, more fuel-efficient airliners are being developed. The use of turbine engines in light general aviation and commuter aircraft is being explored, along with more efficient propulsion systems, such as the propfan concept. Using satellite communication signals, onboard microcomputers can provide more accurate vehicle navigation and collision-avoidance systems. Digital electronics coupled with servo mechanisms can increase efficiency by providing active stability augmentation of control systems. New composite materials providing greater weight reduction; inexpensive one-man, lightweight, noncertified aircraft, referred to as ultralights; and alternate fuels such as ethanol, methanol, synthetic fuel from shale deposits and coal, and liquid hydrogen are all being explored. Aircraft designed for vertical and short takeoff and landing, which can land on runways one-tenth the normal length, are being developed. Hybrid vehicles such as the Bell XV-15 tilt-rotor already combine the vertical and hover capabilities of the helicopter with the speed and efficiency of the airplane. Although environmental restrictions and high operating costs have limited the success of the supersonic civil transport, the appeal of reduced traveling time justifies the examination of a second generation of supersonic aircraft.

Thursday, 16 May 2013

Gasoline

The First Oil Well Was Dug Just Before the Civil War
Edwin Drake dug the first oil well in 1859 and distilled the petroleum to produce kerosene for lighting. Drake had no use for the gasoline or other products, so he discarded them. It wasn't until 1892 with the invention of the automobile that gasoline was recognized as a valuable fuel. By 1920, there were 9 million vehicles on the road powered by gasoline, and service stations were popping up everywhere.
Photograph: a field of dozens of oil wells just offshore at Summerland, California (Santa Barbara County), in 1915

Higher Octane and Lead Levels

By the 1950s, cars were becoming bigger and faster. Octane levels increased and so did lead levels; lead was added to gasoline to improve engine performance.

Leaded Gasoline Was Taken Off the U.S. Market

Unleaded gasoline was introduced in the 1970s, when the health problems from lead became apparent. In the United States, leaded gasoline was completely phased out in the 1980s, but it is still being used in some parts of the world.

Generators


Today, everybody is familiar with electricity. Almost everybody uses electricity as a ready-to-use form of energy that is provided in a clean way. This is the result of long research and engineering work which can be traced back for centuries. The first generators of electricity were not electrodynamic like today's machines; they were based on electrostatic principles. Long before electrodynamic generators were invented, electrostatic machines and devices had their place in science. Due to their principle of operation, electrostatic generators produce high voltages but low currents. The output is always a unipolar static voltage. Depending on the materials used, it may be positive or negative.
Friction is the key to the operation! Although most of the mechanical energy needed to power an electrostatic generator is converted into heat, a fraction of the work (not a fraction of the friction) is used to generate electric potential by separating charges.

The Beginnings

In ancient Greece, amber was known to attract small objects after being rubbed with cloth or fur. From the Greek word elektron, the modern term electricity is directly derived. In 1600, William Gilbert (1544-1603) coined the expression electrica in his famous book De Magnete.
In ancient Greece, there was no effort to mechanize the rubbing of a piece of amber in order to get a continuous effect. Although light could be observed in the dark, nobody made a connection between this and lightning, which was regarded as Zeus' weapon. The knowledge about this type of electricity remained almost unchanged until the beginning of the seventeenth century. Several ancient authors such as Pliny the Elder, and Renaissance men such as Giovanni Battista della Porta, described the effect but without drawing further conclusions.

The Sulphur Ball

Otto von Guericke (1602-1686), who became famous for his Magdeburg vacuum experiments, invented the first simple electrostatic generator. It was made of a sulphur ball which rotated in a wooden cradle. The ball itself was rubbed by hand. As the principles of electric conduction had not been discovered yet, von Guericke carried the charged sulphur ball to the place where the experiment was to take place.

von Guericke's first electrostatic generator around 1660
Guericke made the ball by pouring molten sulphur into a hollow glass sphere. After the sulphur had cooled, the glass hull was smashed and removed. Some time later, a researcher found out that the empty glass sphere itself provided the same results.

A Baroque Gas Discharging Lamp

By 1730, scientific research had discovered the principles of electric conduction. One inspiration for electrical research came from an area that at first glance had absolutely nothing to contribute: the mercury barometer invented by Evangelista Torricelli. If the mercury-filled tube was shaken and the evacuated portion of the tube was observed in the dark, a glow could be seen. Francis Hauksbee, both inventive and inquisitive, designed a rotor to rub a small disk of amber in a vacuum chamber. When the chamber contained some mercury vapour, it lit up! This was the first mercury gas discharge lamp. The engravings show surprising similarities to today's lightning spheres.

Hauksbee's amber rotor

Hauksbee's setup to demonstrate
light effects caused by static electricity.

 The Beer Glass Generator

Glass proved to be an ideal material for an electrostatic generator. It was cheaper than sulphur and could easily be shaped into disks or cylinders. An ordinary beer glass turned out to be a good insulating rotor in Winkler's electrostatic machine.

An electrostatic machine invented by
Johann Heinrich Winkler (1703-1770)
Machines like these were not only built for scientific research but were also favourite toys for amusement. In the 18th century, everybody wanted to experience the electric shock. Experiments like the "electric kiss" were a salon pastime. Although the French Abbé Nollet demonstrated in 1745 that small animals such as birds and fish were killed instantaneously by the discharge of a Leyden jar, nobody was aware of the latent dangers of such experiments.

The electric kiss provided a very special thrill
Soon after the effects of electrostatic discharge were discovered, researchers and charlatans began trying to cure diseases with electric shocks. Here we find parallels to the "Mesmerists", who claimed to use magnetic powers for therapy.

Toothache therapy around 1750
Being ill at that time was no fun!

 The Leyden Jar

In 1745, the so-called Leyden jar (or Leyden bottle) was invented by Ewald Jürgen von Kleist (1700-1748). Kleist was searching for a way to store electric energy and had the idea of filling it into a bottle! The bottle contained water or mercury and was placed on a grounded metal surface. No wonder the device worked, though not because electricity can actually be filled into bottles. One year after Kleist, Cunaeus in Leyden in the Netherlands independently invented the bottle again. Thus the term Leyden jar became the more familiar one, although in Germany the device was sometimes also called Kleist's bottle.
Intense research began to find out which liquid was the most suitable. A few years later, researchers had learned that water was not necessary at all: a metal coating on the inside and outside of the jar was sufficient for storing electrostatic energy. Thus the first capacitors were born.
Early Leyden jars
An advanced electrostatic battery in 1795
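To put a rough number on what such a jar stores, here is a minimal sketch of my own (not from the original text) using the ordinary capacitor energy formula E = 1/2 C V^2; the capacitance (about 1 nF) and charging voltage (about 30 kV) are assumed, ballpark values for a hand-sized jar charged by an electrostatic machine.

    # Rough, illustrative estimate of the energy stored in a Leyden jar,
    # treated as an ordinary capacitor. All numbers are assumptions.
    def stored_energy_joules(capacitance_f: float, voltage_v: float) -> float:
        """Energy of a charged capacitor: E = 1/2 * C * V^2."""
        return 0.5 * capacitance_f * voltage_v ** 2

    # A hand-sized jar of roughly 1 nF charged to roughly 30 kV
    energy = stored_energy_joules(capacitance_f=1e-9, voltage_v=30e3)
    print(f"about {energy:.2f} J per jar")  # ~0.45 J under these assumptions

Connecting several jars in parallel, as described next, multiplies the capacitance and therefore the stored charge and energy, which is exactly what made the larger batteries dangerous.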

Frequently, several jars were connected in order to multiply the charge. Experimenting with this type of capacitor started to become quite dangerous. In 1753, while trying to charge a battery during a thunderstorm, Prof. Richmann was killed when he inadvertently got too close to a conductor with his head. He is the first known victim of high-voltage experiments in the history of physics. Benjamin Franklin was lucky not to earn this honour when performing his kite experiments.
St. Petersburg, 6 August 1753. Prof. Richmann and his assistant were struck by lightning while charging capacitors. The assistant escaped almost unharmed, whereas Richmann died instantly. The post-mortem examination revealed that "he had only a small hole in his forehead, a burnt left shoe and a blue spot on his foot. [...] the brain was fine, the front part of the lung sound, but the rear was brown and black with blood." The conclusion was that the electric discharge had passed through Richmann's body. The scientific community was shocked.

 The Disk Rotor

Generators based on disks were invented around 1800 by Winter. Their characteristic construction element is a mercury-treated leather cushion that covers approximately one quarter of the disk surface. The leather cushion replaced the experimenter's hand and gave a more continuous result. In 1799, the first experiments in electrolysis using electrostatic energy were made. It turned out that the recently invented chemical cells produced the same or a better effect than many thousands of discharges from a battery of Leyden jars. Experiments like these helped to shape the understanding of electric energy.

An early disk generator by Winter

 The Advanced Rotor

Inventors found that it is a good idea to laminate metal or cardboard sheets onto the insulating disks of electrostatic generators.

The so-called influence machine by Holtz, 1865
Disks for advanced generators of this type were made of glass, shellac and ebonite (hard rubber). Hard rubber in particular turned out to be a very suitable material, as it was not damaged as easily as glass or shellac.

 The Wimshurst Machine

Wimshurst machines are the end point of the long development of electrostatic disk machines. They gave very good results and were frequently used to power X-ray tubes. The characteristic construction element of these machines is a set of disks laminated with radially arranged metal sectors. The advantage of disks is that they can be stacked on one axle in order to multiply the effect.

A Wimshurst machine around 1905.
The end point of a long development.
The invention of the electromagnetic induction coil by Ruhmkorff in 1857 began to make electrostatic disk machines obsolete. Today, both devices serve only as useful demonstration objects in physics lessons to show how electric charges accumulate. For technical applications, high voltages can be generated more easily by electronic and electromagnetic means.

A Ruhmkorff inductor to power an X-ray tube (1910)

 The Van de Graaff Generator

The principle of this machine is to transport electric charge with the aid of a belt made of insulating, flexible material, e.g. rubber. Early in the development of machinery, it was observed that mechanical transmission belts could produce unintended high voltages, which harmed people or set buildings alight by igniting parts of a workshop. The same effect, caused by transporting highly flammable celluloid film through a projector, was the reason more than one cinema went up in flames.

A 5-megavolt Van de Graaff generator
The principle is based on an endless insulating belt that carries electric charge to a conductor. Although the device can be operated without an additional electric power source, normally a DC high voltage is applied to the belt, which considerably increases the output voltage. Van de Graaff generators are still in use in particle accelerator labs. The largest machines produce up to 10 million volts.
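To get a feel for the numbers, here is a back-of-the-envelope sketch of my own (not from the original text). It treats the high-voltage terminal as an isolated sphere, assumes the belt delivers a small constant charging current, and caps the voltage where the surrounding air would break down (roughly 3 MV/m for dry air at atmospheric pressure); all figures are illustrative assumptions.

    import math

    EPS0 = 8.854e-12      # vacuum permittivity, F/m
    E_BREAKDOWN = 3e6     # approximate breakdown field of dry air, V/m

    def terminal_voltage(radius_m: float, belt_current_a: float, seconds: float) -> float:
        """Rough estimate of a Van de Graaff terminal voltage over time.

        The terminal is modelled as an isolated sphere (C = 4*pi*eps0*R).
        The belt delivers a constant current, so V(t) = I*t / C, until the
        surface field V/R reaches the breakdown field of the surrounding air.
        """
        capacitance = 4 * math.pi * EPS0 * radius_m          # farads
        v_unlimited = belt_current_a * seconds / capacitance
        v_sparking_limit = E_BREAKDOWN * radius_m
        return min(v_unlimited, v_sparking_limit)

    # Illustrative numbers only: a 0.3 m terminal fed by a 10 microampere belt
    for t in (0.01, 0.1, 1.0, 10.0):
        v = terminal_voltage(radius_m=0.3, belt_current_a=10e-6, seconds=t)
        print(f"after {t:5.2f} s: about {v / 1e3:7.1f} kV")

Under these assumptions a 0.3 m sphere tops out near 0.9 MV; the multi-megavolt laboratory machines mentioned above push past this limit with much larger terminals and, in modern accelerators, pressurized insulating gas.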

 The Steam Electrostatic Generator

Wet steam forced through a nozzle becomes electrically charged. This was the origin of the idea of constructing an electrostatic generator based on steam. Although these machines gave good results, they were difficult to maintain. As they were also expensive, comparatively few were built, and only a few survive in museum collections.

A steam electrostatic generator

Conclusion

Electrostatic generators have their place in the history of science. They accompanied the path toward understanding electricity. However, their efficiency is poor compared with the mechanical effort needed to produce the electrical energy. In this context, I'd like to seriously warn all would-be inventors of electrostatic PMMs based on disk rotors or on the Van de Graaff principle. Machines of this type are not toys, and even small devices can be dangerous if handled carelessly. As a rule of thumb, a charged Leyden jar of 1/2 liter (about 1/8 gallon) volume can endanger your life!