ELECTRICAL SYSTEM IN A VEHICLE

The main function of the electrical system in a vehicle is to generate, store, and supply electric current to the various systems of the vehicle. It operates the electrical components and parts in the vehicle.

Most components of earlier-generation vehicles were predominantly mechanical in nature and operation. Over time, these components came to operate electrically or electronically, shedding the purely mechanical designs used in earlier vehicles. Nowadays, most vehicular systems rely on electrical operation for ease of use and precision control.

Even advanced steering systems such as Electric Power Assisted Steering (EPAS) run on electrical power. Hence, engineers needed consistency in the generation of electrical power, and they employed different mechanisms to effectively generate, regulate, store, and supply electric current within the vehicle.

Negative Earth:

Earlier-generation cars mostly used a positive ground in their electrical system. In this arrangement, the positive terminal of the battery was attached to the chassis while the negative terminal was live. This technique was later discontinued, and today modern cars use a negative earth in their electrical system.

Generally, most cars use a 12-volt electrical system. However, some small bikes still use a 6-volt system, whereas some commercial vehicles use a 24-volt system.

The vehicle electrical system consists of the following main components:

  • Magneto
  • Generator
  • Alternator
  • Cut Out/Voltage Regulator
  • Battery
Vehicle Electrical System: Magneto

A magneto is an electrical device that generates periodic pulses of alternating current (AC) using permanent magnets. Unlike a dynamo, the magneto has no commutator and hence does not produce direct current (DC). Manufacturers classify the magneto as a type of alternator. However, it differs from other alternators, which use field coils rather than permanent magnets.

The magneto has the following parts:
  • Set of permanent magnets
  • Coil
  • Cranking mechanism (usually a kick-starter on a motorcycle)

Thus, the magneto converts the mechanical energy of the engine into electricity to run the engine uninterruptedly. The magneto's magnetic flux strength is constant, so its main advantage is that its output stays steady regardless of load variations. However, if the engine shuts down, it again needs an external input to restart.

Today, the use of such magnetos for ignition is extremely limited. However, a few motorcycles, small bikes, and quads still use the magneto system. The main advantage of this technique is reduced weight. Initially, you need an input from the battery to start the engine; then, the magneto generates electricity from the engine's mechanical energy.

Vehicle Electrical System: Dynamo/Generator

A dynamo/generator is a device that converts mechanical energy into electricity. It supplies the electricity for charging the battery of a vehicle. The generator gets its drive from the engine, generally through a belt. In earlier-generation vehicles, you will see this type of arrangement.

The speed of the generator largely depends on the speed of the engine: as the engine speed increases, so does the speed of the generator, and it therefore varies greatly across the engine's speed range (its power band). However, the situation demands that the generator output remain nearly constant.

Another name for the automotive generator is the dynamo. The automotive generator produces direct current (DC), because the electrical components need DC to function. Automotive applications most commonly use a shunt-wound generator.

Initially, manufacturers employed generators to supply DC, which the other electrical components and gadgets could use directly. The generator has since been replaced by the alternator, which generates alternating current (AC) that is then converted to DC with the help of diodes.

The main components of a generator are:
  • Frame
  • Armature
  • Field coils

Vehicle Electrical System: Alternator

The alternator is also referred to as the AC generator. It is a device that produces alternating current (AC) rather than direct current (DC); hence the name. It works on the same basic principle as the generator. In the early 1960s, the alternator replaced the DC generator because of its distinct advantages over the latter.

However, the automotive electrical system uses only DC, so you need a mechanism to convert the AC to DC. An alternator converts its AC output to DC with the help of diodes.
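To make the diode step concrete, here is a minimal numerical sketch (not from the original article) of an ideal six-diode bridge applied to a three-phase alternator output; the voltage, frequency, and number of phases are illustrative assumptions.

```python
import numpy as np

# Ideal six-diode bridge on a three-phase alternator output (illustrative values).
t = np.linspace(0.0, 0.04, 2000)                  # 40 ms of time, in seconds
peak = 14.0                                       # assumed peak phase voltage, volts
phases = np.array([peak * np.sin(2 * np.pi * 50 * t + k * 2 * np.pi / 3)
                   for k in range(3)])            # three phases, 120 degrees apart

# At each instant the bridge conducts from the most positive phase to the most
# negative one, so the ideal rectified output is their difference.
v_dc = phases.max(axis=0) - phases.min(axis=0)

print(f"mean DC output ~ {v_dc.mean():.1f} V, ripple ~ {v_dc.max() - v_dc.min():.1f} V")
```

At every instant the bridge simply conducts from the most positive phase to the most negative one, which is why the rectified output is already nearly flat DC with only a small ripple.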

The main components of an alternator are:
  • Frame or housing
  • Rotor (with electromagnets)
  • Stator
  • Slip rings and brushes

Vehicle Electrical System: Cut-Out Relay

The cut-out mechanism regulates, and when necessary cuts off, the current flowing to the battery. When the engine is running at very slow speeds, the generator output is typically less than the battery voltage of 12 volts and is therefore insufficient to charge the battery. In such a scenario, the battery starts to drain into the generator, because the battery voltage is higher than the generator output.

To prevent the battery from draining, manufacturers employ a voltage regulator/cut-out, which connects and disconnects the generator from the battery. When the generator output is lower than the battery voltage, it disconnects the generator from the battery; when the output is higher, it connects the generator back to the battery. Thus, it prevents the battery from discharging at slow engine speeds.
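The switching rule is easy to express in code. The sketch below only illustrates the logic described above; the 12-volt figure comes from the text, while the small reconnect margin is an assumed hysteresis added so the relay does not chatter around the threshold.

```python
# Minimal sketch of the cut-out relay logic; the 12 V figure is from the article,
# the 0.5 V reconnect margin is an illustrative assumption.
BATTERY_VOLTAGE = 12.0
RECONNECT_MARGIN = 0.5   # hysteresis so the relay does not chatter

def cutout_state(generator_voltage: float, currently_connected: bool) -> bool:
    """Return True if the generator should be connected to the battery."""
    if generator_voltage > BATTERY_VOLTAGE + RECONNECT_MARGIN:
        return True                    # generator can charge the battery
    if generator_voltage < BATTERY_VOLTAGE:
        return False                   # battery would drain back into the generator
    return currently_connected         # inside the margin: keep the current state

# Example: idle engine (low output) vs. cruising speed (healthy output)
state = False
for v in (11.2, 12.2, 13.8, 11.5):
    state = cutout_state(v, state)
    print(f"generator at {v:4.1f} V -> {'connected' if state else 'disconnected'}")
```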

Vehicle Electrical System: Battery

The main purpose of a battery is to store electricity in DC form for future use. A car or motorcycle battery is just like any other battery: it has two poles, positive and negative. Modern cars use negative-earth technology. The positive terminal is usually bigger in diameter than the negative terminal; this prevents the battery from being connected the wrong way round.

Electric vehicles use more advanced 'Lithium-Ion' (Li-Ion) batteries. These batteries can store more energy and take less time to charge than conventional batteries. Li-Ion batteries have high energy density and low self-discharge, so they provide long hours of operation before needing a recharge.


WANKEL ENGINE


It works on a principle similar to the Otto cycle. It consists of a three-lobed rotor, a casing, a spark plug, and suction and exhaust ports. The rotor is driven eccentrically in the casing in such a way that there are three separate volumes trapped between the rotor and the casing.

 

The volume trapped at each lobe performs the suction, compression, ignition, combustion, expansion, and exhaust processes in turn. Therefore, we get three power strokes in one revolution of the rotor.


https://youtu.be/josJhz8VS8A

In the case of a four-stroke I.C. engine, we get one power stroke in two revolutions of the crankshaft. Thus, the Wankel engine develops six times the power for the same cylinder capacity compared to a reciprocating I.C. engine.
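Taking the article's comparison at face value (counting power strokes per rotor revolution), the arithmetic behind the factor of six is:

\[
\frac{3\ \text{power strokes per rotor revolution}}{\tfrac{1}{2}\ \text{power stroke per crankshaft revolution}} = 6 .
\]

Note that the Wankel's output (eccentric) shaft turns three times per rotor revolution, so a comparison made per output-shaft revolution gives a smaller factor; the figure of six holds only when counting per rotor revolution, as above.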

Intake:-

When a tip of the rotor passes the intake port, fresh mixture starts entering the first chamber. The chamber keeps drawing in the fresh charge until the second apex reaches the intake port and closes it. At that instant, the fresh air-fuel mixture is sealed into the first chamber and is carried onward for combustion.

Compression:-

Chamber one (between corner 1 and corner 2), containing the fresh charge, is compressed by the shape of the housing by the time it reaches the spark plug.
While this happens, a new mixture starts entering the second chamber (between corner 2 and corner 3).

Combustion:-

When the spark plug fires, the highly compressed mixture expands explosively. The pressure of expansion pushes the rotor in the forward direction. This continues until the first corner passes the exhaust port.

Exhaust:-

As the apex (corner 1) passes the exhaust port, the hot high-pressure combustion gases are free to flow out of the port.
As the rotor continues to move, the volume of the chamber keeps decreasing, forcing the remaining gases out of the port. By the time corner 2 closes the exhaust port, corner 1 passes the intake port, repeating the cycle.
While the first chamber is discharging gases, the second chamber (between corner 2 and corner 3) is under compression. Simultaneously, chamber 3 (between corner 3 and corner 1) is drawing in a fresh mixture.
This is the beauty of the engine: the four sequences of the four-stroke cycle, which occur consecutively in a piston engine, occur simultaneously in the Wankel engine, producing power in a continuous stream.


American Space Mission looks to work with ISRO

For the first time in the history of Latin American space travel, a crew made up only of Latin Americans will be going onboard. The first Latcosmos mission has been promoted by the Ecuadorian agency EXA, which will provide the funds for the first space trip. The participating crew is commanded by Commander Ronnie Nader (EXA, Ecuador) and includes Adolfo Chaves (TEC, Costa Rica), Alberto Ramírez (UNAM, Mexico), and Margot Solberg (US).

Two of the astronauts interacted with Financial Express Online and talked about their mission and how they can collaborate with the Indian Space Research Organisation (ISRO) in the future.

Dr Adolfo Chaves Jiménez, Researcher Coordinator of the Space Systems Engineering Laboratory (SETEC Lab), School of Electronics, Costa Rica Institute of Technology, has been chosen to visit space as part of the first Latin American space mission in history.

Mission specialist astronaut Commander Ronnie Nader, of the Ecuadorian Civil Space Agency (EXA)/FAE, is the only person in history to achieve the two most important milestones in astronautics for a country: he is Ecuador's first astronaut and also the father of its first satellites. At the same time, he is the only Ecuadorian representative to the International Astronautical Federation (IAF) General Assembly. He is also the first and only Ecuadorian citizen to be elected a member of the International Academy of Astronautics, and a member of the American Association for the Advancement of Science (AAAS).

Following are excerpts of an interaction with Dr. Adolfo Chaves Jiménez, who talks about the mission and how ISRO and Costa Rica can work together in the space sector:

What is the Irazú Project (the first Central American satellite)?

I got in contact with the Central American Society for Aeronautics and Space (ACAE) because I wanted to participate in the project. They asked me to be their first project manager. Finally, with the help of a lot of people, the Costa Rica Institute of Technology (TEC) became the partner of ACAE responsible for the technological development of the satellite.

What is the significance of the mission, which will fly on a New Shepard rocket from Blue Origin?

The idea of the mission is to demonstrate that Latin America can undertake joint space missions. The support of EXA, through the Latin American and Caribbean Group (GRULAC) of the International Astronautical Federation, has been fundamental to this, because they are offering us this opportunity. As coordinator of the Space Systems Laboratory (SETEC-Lab) of the Tecnológico de Costa Rica (TEC), I will take part in the suborbital trip in the mission called ESAA-01 EX SOMINUS AD ASTRA, which is part of the Latcosmos-C program.

This is one of the many steps that the people of Costa Rica have taken to promote space as a tool for development and inspiration.


Truck Refrigeration market Advanced Technologies


Truck refrigeration is the process of keeping the goods inside the truck cold. It helps to move commodities that are perishable and temperature-sensitive, and it protects the merchandise from outside temperature, dirt, and other harmful particles. These trucks are equipped with a mechanical cooling system, powered by small-displacement diesel engines, or they utilize CO2 as a cooling agent. Truck refrigeration is used in transporting meat, fish, groceries, foodstuffs, and other perishable items.

Top Companies Covered in this Report:

3M Company, American Durafilm, Covestro AG, E. I. du Pont de Nemours and Company, Eastman Chemical Company, Evonik Industries, Honeywell International Inc., Sealed Air, Solvay S.A.,

The Dow Chemical Company

Get a sample copy of the Report at:
https://www.premiummarketinsights.com/sample/TIP00014917

What is the Dynamics of the Truck Refrigeration Market?

The global truck refrigeration market is growing at a significant pace thanks to driving factors such as its increasing use in the food and beverage industries owing to its preservative nature. Furthermore, the increased shelf life of perishables has made businesses invest more in this market, which has significantly driven the truck refrigeration market.

However, the high cost of capital and the high level of technicality in transportation are projected to hinder the growth of the truck refrigeration market. Likewise, the rise of consumption in the food and beverage industry, alongside the expanding applicability in various industries, may provide lucrative opportunities for market players in the near future.

What is the SCOPE of the Truck Refrigeration Market?

The "Global Truck Refrigeration Market Research to 2027" is a specialized and in-depth study of the chemicals and materials industry with a special focus on global market analysis. The report aims to provide an overview of the truck refrigeration market with detailed market segmentation by material, type, end-user industry, and geography. The global truck refrigeration market is predicted to witness high growth during the forecast period. The report provides key statistics on the market status of the leading truck refrigeration market players and offers key trends and opportunities in the market.

What is the Market Segmentation?

The global truck refrigeration market is segmented on the basis of material, type, and end-user industry. On the basis of vehicle type, the truck refrigeration market is segmented into L&MCV and HCV. On the basis of application type, the market is bifurcated into meat & fish, grocery, dairy products, and others. Based on end-user industry, the global truck refrigeration market is segmented into food, pharmaceutical, industrial, and plants/flowers.

What is the Regional Framework of Truck Refrigeration Market?

The report provides an in-depth overview of the industry, including both qualitative and quantitative information. It provides an overview and forecast of the global truck refrigeration market based on various segments. It also provides market size and forecast estimates from 2018 to 2027 with respect to five major regions, namely North America, Europe, Asia-Pacific (APAC), Middle East and Africa (MEA), and South America. The truck refrigeration market in each region is later sub-segmented by respective countries and segments. The report covers the analysis and forecast of 18 countries globally, along with the current trends and opportunities prevailing in each region.


Software-defined satellite

"Software-defined satellite": an emerging technology in the space industry

The term "software-defined satellite" has already appeared in the space industry and related media, but for the purposes of this article it is defined as follows:

instead of viewing a satellite as a monolithic piece of hardware and software designed to perform a specific mission, one can see the same satellite as a platform capable of running multiple different missions (defined as software applications) on the same hardware.

This definition follows the same approach as other "software-defined" entities, such as software-defined radio transceivers that can be reconfigured for a variety of RF tasks, and software-defined networking appliances that can support a wide range of telecommunications applications.

In a similar manner, implementing satellite missions in software can offer a number of advantages, described in detail further below.

The primary advantage of using “software-defined” solutions is the opportunity to reuse one satellite for multiple applications for multiple users.

While the nature of the applications is defined by the instruments available to the users, the common Earth observation and communications instruments, such as imaging cameras and spectrometers, already allow a wide range of different usage scenarios.
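As a rough illustration of the "missions as software applications" idea, the hypothetical sketch below shows one way a platform operator might expose a shared instrument to independently deployed mission apps and meter their usage for pay-per-use billing. All class and method names here are invented for illustration and do not describe any real satellite API.

```python
from abc import ABC, abstractmethod

class SatellitePlatform:
    """Stands in for the shared bus: instruments, downlink, and per-use metering."""

    def __init__(self) -> None:
        self.usage_seconds: dict[str, float] = {}

    def capture_image(self, app_name: str) -> bytes:
        # In reality this would command a shared camera; here we just meter usage.
        self.usage_seconds[app_name] = self.usage_seconds.get(app_name, 0.0) + 1.5
        return b"<raw image frame>"

class MissionApp(ABC):
    """Interface a platform operator might require every mission app to implement."""

    @abstractmethod
    def run(self, platform: SatellitePlatform) -> None: ...

class CropMonitoringApp(MissionApp):
    def run(self, platform: SatellitePlatform) -> None:
        frame = platform.capture_image("crop-monitoring")
        print(f"crop-monitoring got {len(frame)} bytes")

class ShipTrackingApp(MissionApp):
    def run(self, platform: SatellitePlatform) -> None:
        frame = platform.capture_image("ship-tracking")
        print(f"ship-tracking got {len(frame)} bytes")

# Two independent missions share one platform; usage is metered for pay-per-use billing.
platform = SatellitePlatform()
for app in (CropMonitoringApp(), ShipTrackingApp()):
    app.run(platform)
print(platform.usage_seconds)
```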

Currently, any party that is interested in deploying any kind of satellite in space has to go through a multi-step process of designing the satellite itself, finding a launch or mission provider, building or buying the necessary hardware, obtaining the regulatory permits and telecom licenses, and so on.

Multi-year and multi-decade projects are common in the space industry.

But with the "software-defined" approach, deployment of software code to an existing satellite can be done within a single day, and operations can begin immediately afterward.

Low cost.

The space industry is one of the most capital-intensive areas of the global economy. The growth of the CubeSat segment and the growing availability of satellite data have lowered the barriers to entry for small companies and solo entrepreneurs, but in-space activities remain outside the reach of the average software developer. Using a model where multiple satellite missions share access to the resources of a single satellite, and applying a "pay-per-use" billing model to the users, a lot more people would be able to afford direct participation in the upstream space segment.

In a similar manner, access to space technologies often sits behind industry or government barriers, frequently requiring a security clearance or citizenship of one of a select few countries with a well-established space agency and aerospace industry.

By comparison, modern software development is a lot more open and accessible to the global community of programmers.

By taking the same approach, satellite mission development and operations can become a lot more accessible, and therefore allow many more business concepts to be implemented and tested in the environment of a real space mission.

Platform-independence.

Another important advantage of making satellite missions software-defined is removing the dependency on specific hardware. This allows the creation of platform-independent, portable application packages that can be reused on multiple satellite platforms, provided there is enough compatibility between the models in the family.

Such a development would mirror the history of terrestrial computers, which evolved from unique machines that could only run software designed for their own architecture to modern systems that support software running in native, platform-independent, and virtualized environments.

Future opportunities.

The biggest advantage of utilizing a "software-defined" approach to satellite development will be the hardest one to predict. The benefits of "software-defined satellites" can go far beyond the ability to reconfigure a single satellite for multiple customers and multiple missions. Opening up an entirely new domain for independent developers may create the same boom of new applications as the creation of the World Wide Web or modern smartphones. Once all the infrastructure to provide low-cost and low-friction software deployment on a space-based platform is in place, new breakthroughs will surely follow.


The Space Force’s relevance to the green agenda.


When most Americans think of the Space Force, they probably imagine epic space battles or sprawling fantasy sagas. Policymakers who are more "in the know" likely think of the duties and functions that will preoccupy the U.S. military's newest branch in the years ahead. But few, if any, pause to consider the role that the USSF has the potential to play in another arena as well: that of global climate change. This is because, while most don't realize it, the Space Force is positioned to be among the most powerful organizations enabling and advancing a global green agenda.

After all, it is the USSF that operates the Global Positioning System (GPS), one of the world's most powerful green technologies. Since its advent in the 1970s, GPS-enabled navigation has facilitated global sea, land, and air transportation and reduced global fuel expenditures by between 15 and 21 percent. That figure dwarfs the incremental gains now being sought by advocates of reduced carbon emissions and makes the USSF the operator of the world's most powerful green technology.

But the service is also doing more in this domain. The USSF, for example, is taking the lead on what will become the ultimate green energy technology: space-based solar power. Ignored for decades by both NASA and the Department of Energy, space-based solar power is unique as a renewable energy source because it is far more efficient than its terrestrial counterpart and requires much less land. Moreover, its vast availability would allow a mature system to satisfy current global demand many times over.

By delivering power directly to where it is needed, space-based solar power, once mature, would enable us to supply developing nations with a non-combustion energy source, substantially reducing the impact of economic development on the environment. It could likewise enable rural electrification, obviating the need for carbon-intensive cooking practices like burning wood and trash. And, since it eliminates the need for miles of forest-disrupting roads and power lines, it could even be used to make water, alleviating scarcity and suffering for millions.

Just as GPS began as military research but broadened to become a global utility, so too could current research one day unlock a carbon-free energy source capable of meeting one hundred percent of global demand. And it is the Space Force that is pioneering its development.

Yet there is still more. The USSF is also at the center of climate intelligence, helping us to learn both about our weather patterns on Earth and about the space weather (the activity of the Sun) that impacts our biosphere. There wouldn't even be a global green movement had it not been for early military space research to photograph our weather, which gave us our first view of our planet in the 1960s. Some six decades later, U.S. Space Force weather satellites still give us knowledge critical to understanding our climate and to managing our impact on it.
The Space Force also plays a pivotal role in protecting the space environment itself. It provides traffic alerts to prevent satellite collisions (and therefore space debris), and it helps to develop norms of behavior that regulate the space information services which increasingly monitor our terrestrial environment.

Militaries are, of course, concerned about climate security and human security. Yet their first focus, and the one driving all of these innovations, is national security. As is the case with most tools and technology, something built for one purpose ends up being useful for other purposes. Military space technology has advanced, and will continue to advance, the safety of Earth's climate and biosphere. It can also help us to secure a better, and greener, future.

Peter Garretson is a senior fellow in Defense Studies with the American Foreign Policy Council and a technology consultant who focuses on space and defense. He was previously the director of Air University's Space Horizons Task Force, America's think factory for space, and was deputy director of America's premier space strategy program, the Schriever Scholars. All views are his own.

 


The Exotic Behavior of Matter in the middle of Jupiter


The hydrogen atom, with its single proton orbited by one electron, is arguably the simplest material out there. Elemental hydrogen can nonetheless exhibit extremely complex behavior: at megabar pressures, for instance, it undergoes a transition from being an insulating fluid to being a metallic, conductive fluid. While the transition is fascinating simply from the point of view of condensed matter physics and materials science (liquid-liquid phase transitions are rather unusual), it also has significant implications for planetary science, since liquid hydrogen makes up the interior of giant planets such as Jupiter and Saturn, as well as brown dwarf stars.

Understanding the liquid-liquid transition is therefore a central part of accurately modeling the structure and evolution of such planets, and standard models generally assume a sharp transition between the insulating molecular fluid and the conducting metallic fluid. This sharp transition implies a discontinuity in density and thus a clear border between an inner metallic mantle and an outer insulating mantle in these planets.

While scientists have made considerable efforts to explore and characterize this transition, as well as dense hydrogen's many unusual properties, including a rich and poorly understood solid polymorphism, an anomalous melting line, and a possible transition to a superconducting state, laboratory investigation is complicated by the need to create a controllable high-pressure, high-temperature environment and to confine the hydrogen during measurements. Experimental research has therefore not yet reached a consensus on whether the transition is abrupt or smooth, and different experiments have located the liquid-liquid transition at pressures as much as 100 gigapascals apart.

"The kind of experiment that you need to be able to do to study a material in the same range of pressures that you find on Jupiter is highly non-trivial," Ceriotti said. "As a result of the constraints, many different experiments have been performed, with results that are very different from one another."

Though modeling techniques introduced in the last decade have allowed scientists to better understand the system, the large computational expense involved in essentially solving the quantum mechanical problem for the behavior of hydrogen atoms has meant that these simulations were necessarily limited in time, to a scale of a few picoseconds, and in scope, to just a few hundred atoms. The results here have also been mixed.

In order to look at the matter more thoroughly, Ceriotti and colleagues Bingqing Cheng at the University of Cambridge and Guglielmo Mazzola at IBM Research Zurich used an artificial neural network architecture to construct a machine-learning potential. Based on a small number of very accurate (and time-consuming) calculations of the electronic structure problem, the cheap machine-learning potential allowed the investigation of hydrogen phase transitions at temperatures between 100 and 4000 K and pressures between 25 and 400 gigapascals, with converged simulation size and time.
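To make that workflow concrete, here is a deliberately toy sketch of the "train a cheap surrogate on a few expensive calculations, then evaluate it everywhere" idea. An analytic pair potential stands in for the expensive electronic-structure calculation, and a polynomial fit stands in for the neural-network potential; neither reflects the actual code, descriptors, or physics used in the study.

```python
import numpy as np

def expensive_reference_energy(r: float) -> float:
    """Stand-in for a costly electronic-structure calculation (here a toy pair potential)."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

# Step 1: a small number of expensive reference calculations.
r_train = np.linspace(0.95, 2.5, 12)
e_train = np.array([expensive_reference_energy(r) for r in r_train])

# Step 2: fit a cheap surrogate model (here a polynomial in 1/r; the real work
# used an artificial neural network trained on many-atom configurations).
coeffs = np.polyfit(1.0 / r_train, e_train, deg=6)

def cheap_model(r: float) -> float:
    return np.polyval(coeffs, 1.0 / r)

# Step 3: the surrogate is now cheap enough to evaluate at millions of points,
# e.g. inside long molecular-dynamics runs at many temperatures and pressures.
for r in np.linspace(1.0, 2.4, 5):
    print(f"r = {r:.2f}: reference {expensive_reference_energy(r):+.4f}, "
          f"surrogate {cheap_model(r):+.4f}")
```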

The simulations, mostly run on EPFL computers at SCITAS, took just a couple of weeks, compared with the hundreds of millions of years in CPU time it would have taken to run traditional simulations solving the quantum mechanical problem. The resulting theoretical study of the phase diagram of dense hydrogen allowed the team to reproduce the re-entrant melting behavior and the polymorphism of the solid phase.

Simulations based on the machine-learning potential showed, contrary to the common assumption that hydrogen undergoes a first-order phase transition, evidence of continuous metallization in the liquid. This in turn not only suggests a smooth transition between insulating and metallic layers in giant gas planets, it also reconciles existing discrepancies between lab and modeling experiments.

"If high-pressure hydrogen is supercritical, as our simulations suggest, there is no sharp transition where all the properties of the fluid have a sudden jump," Ceriotti said. "Depending on the precise property you probe, and the way you define a threshold, you would find the transition to occur at a different temperature or pressure. This might reconcile a decade of controversial results from high-pressure experiments. Different experiments have measured slightly different things, and that is why they haven't been able to identify the transition at the same point: because there is no sharp transition."

In terms of reconciling their results with some earlier modeling that did identify a sharp transition, Ceriotti says that they could only observe a clear-cut jump in properties when performing small simulations, and that in those cases they could trace the jump to solidification rather than to a liquid-liquid transition. The apparent sharp transition should therefore be understood as an artifact of the limitations of simulations based on traditional physics-based modeling.

The machine-learning approach has allowed the researchers to run simulations that are typically between 4 and 10 times larger and several hundred times longer. This gives them a much better overview of the whole process.

While it was applied in this particular paper to a problem linked to planetary science, the same technology can be applied to any problem in materials science or chemistry, Ceriotti said.

"This is a demonstration of a technology that allows simulations to get into a regime that has been impossible to reach," Ceriotti said. "The same technology that we could use to better understand the behavior of planets can also be used to design better drugs or better-performing materials. There really is the potential for a simulation-driven change in the way we understand the behavior of everyday, as well as exotic, matter."


Transistor-Integrated Microfluidic Cooling for More Powerful Electronic Chips

Managing the heat generated in electronics is a huge problem, especially with the constant push to reduce the size of devices and pack as many transistors as possible into the same chip. The whole problem is how to manage such high heat fluxes efficiently.

Usually, electronic technologies, designed by electrical engineers, and cooling systems, designed by mechanical engineers, are developed independently and separately.

But now EPFL researchers have quietly revolutionized the process by combining these two design steps into one: they have developed an integrated microfluidic cooling technology, designed together with the electronics, that can efficiently manage the large heat fluxes generated by transistors. Their research, which has been published in Nature, will lead to even more compact electronic devices and enable the integration of power converters, with several high-voltage devices, into a single chip.

The best of both worlds

In this ERC-funded project, Professor Elison Matioli, his doctoral student Remco Van Erp, and their team from the School of Engineering's Power and Wide-band-gap Electronics Research Laboratory (POWERlab) began working to bring about a real change in mentality when it comes to designing electronic devices: conceiving the electronics and the cooling together, right from the start, aiming to extract the heat very close to the regions that heat up the most in the device. "We wanted to combine skills in electrical and mechanical engineering in order to create a new kind of device," says Van Erp.

The team was looking to solve the issue of how to cool electronic devices, and particularly transistors. "Managing the heat produced by these devices is one of the biggest challenges in electronics going forward," says Elison Matioli. "It's becoming increasingly important to minimize the environmental impact, so we need innovative cooling technologies that can efficiently process the large amounts of heat produced, in a sustainable and cost-effective way."

Microfluidic channels and hot spots

Their technology is based on integrating microfluidic channels inside the semiconductor chip, together with the electronics, so that a cooling liquid flows inside the electronic chip.

"We placed microfluidic channels very close to the transistor's hot spots, with a simple and integrated fabrication process, so that we could extract the heat in just the right place and prevent it from spreading throughout the device," says Matioli.

The cooling liquid they used was deionized water, which doesn't conduct electricity. "We chose this liquid for our experiments, but we're already testing other, more effective liquids so that we can extract even more heat out of the transistor," says Van Erp.

Reducing energy consumption

"This cooling technology will enable us to make electronic devices even more compact and could considerably reduce energy consumption around the world," says Matioli. "We've eliminated the need for large external heat sinks and shown that it is possible to create ultra-compact power converters in a single chip. This will prove useful as society becomes increasingly reliant on electronics." The researchers are now looking at how to manage heat in other devices, such as lasers and communications systems.


Black Hole Plasma Conditions Created on Earth

Magnetic reconnection is generated by irradiating the micro-coil with the LFEX laser. The particle outflow accelerated by the magnetic reconnection is evaluated using several detectors. As an example of the results, proton outflows with symmetric distributions were observed.

Scientists at Osaka University have used extremely intense laser pulses to create magnetized-plasma conditions comparable to those surrounding a black hole, a study that may help explain the still-mysterious X-rays emitted by some celestial bodies.

Researchers at the Institute of Laser Engineering at Osaka University have successfully used short but extremely powerful laser blasts to generate magnetic field reconnection inside a plasma. This work may lead to a more complete theory of X-ray emission from astronomical objects such as black holes.

In addition to extreme gravitational forces, the matter being devoured by a black hole can also be pummeled by intense heat and magnetic fields. Plasmas, a fourth state of matter hotter than solids, liquids, or gases, are made of electrically charged protons and electrons that have too much energy to form neutral atoms. Instead, they bounce around frantically in response to magnetic fields.

One of the world's largest petawatt laser facilities, LFEX, located at the Institute of Laser Engineering at Osaka University. Credit: Osaka University

 

Within a black hole's accretion disk, magnetic reconnection is a process in which twisted magnetic flux lines suddenly "snap" and cancel one another, leading to the rapid conversion of magnetic energy into particle kinetic energy. In stars, including our sun, reconnection is responsible for much of the coronal activity, such as solar flares. Owing to the strong acceleration, the charged particles in a black hole's accretion disk emit their own light, usually in the X-ray region of the spectrum.

To better understand the process that gives rise to the observed X-rays coming from black holes, scientists at Osaka University used intense laser pulses to create similarly extreme conditions in the lab. "We were able to study the high-energy acceleration of electrons and protons as the result of relativistic magnetic reconnection," says senior author Shinsuke Fujioka. "For example, this makes it easier to understand the origin of the emission from the famous black hole Cygnus X-1."

The magnetic field generated inside the micro-coil (left), and the magnetic field lines corresponding to magnetic reconnection (right) are shown. The geometry of the field lines changed significantly during (upper) and after (lower) reconnection.

This level of light intensity is not easily obtained, however. For a brief instant, the laser required two petawatts of power, equivalent to roughly one thousand times the electrical consumption of the entire globe.
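As a rough sanity check on that comparison, taking average worldwide electric power consumption to be roughly 2 to 3 terawatts (an outside estimate, not a figure from the article):

\[
\frac{2\ \mathrm{PW}}{2.5\ \mathrm{TW}} = \frac{2\times 10^{15}\ \mathrm{W}}{2.5\times 10^{12}\ \mathrm{W}} \approx 800 ,
\]

which is indeed on the order of a thousand times the world's electrical consumption.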

With the LFEX laser, the team was able to achieve peak magnetic fields of a mind-boggling 2,000 teslas. For comparison, the magnetic fields generated by an MRI machine to produce diagnostic images are typically around 3 teslas, and Earth's magnetic field is a paltry 0.00005 teslas.
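Using only the figures quoted above, that field is roughly

\[
\frac{2000\ \mathrm{T}}{3\ \mathrm{T}} \approx 700 \quad\text{times an MRI magnet,}\qquad
\frac{2000\ \mathrm{T}}{5\times 10^{-5}\ \mathrm{T}} = 4\times 10^{7} \quad\text{times Earth's field.}
\]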

The particles of the plasma are accelerated to such an extreme degree that relativistic effects need to be considered. "Previously, relativistic magnetic reconnection could only be studied via numerical simulation on a supercomputer. Now, it is an experimental reality in a laboratory with powerful lasers," says first author King Fai Farley Law.

The researchers believe that this project will help elucidate the astrophysical processes that can happen at places in the Universe that contain extreme magnetic fields.


MATTER OF LIGHT BY HADRON COLLIDER


Scientists on an experiment at the Large Hadron Collider see massive W particles emerging from collisions with electromagnetic fields. How can this happen?

The Large Hadron Collider plays with Albert Einstein's famous equation, E = mc², to transform matter into energy and then back into different forms of matter. But on rare occasions, it can skip the first step and collide pure energy, in the form of electromagnetic waves.

Last year, the ATLAS experiment at the LHC observed two photons, particles of light, ricocheting off each other and producing two new photons.


This year, they've taken that research a step further and discovered photons merging and transforming into something even more interesting: W bosons, particles that carry the weak force, which governs nuclear decay.

This research doesn't just illustrate the central concept governing processes inside the LHC: that energy and matter are two sides of the same coin. It also confirms that at high enough energies, forces that appear separate in our everyday lives, electromagnetism and the weak force, are united.

From massless to massive

If you try to replicate this photon-colliding experiment at home by crossing the beams of two laser pointers, you won't be able to create new, massive particles. Instead, you'll see the two beams combine to form an even brighter beam of light.

"If you go back and look at Maxwell's equations for classical electromagnetism, you'll see that two colliding waves sum up to a bigger wave," says Simone Pagan Griso, a researcher at the US Department of Energy's Lawrence Berkeley National Laboratory. "We only see these two phenomena recently observed by ATLAS when we put together Maxwell's equations with special relativity and quantum mechanics, in the so-called theory of quantum electrodynamics (QED)."
Inside CERN's accelerator complex, protons are accelerated to close to the speed of light. Their normally round forms squish along the direction of motion, as special relativity supersedes the classical laws of motion for processes happening at the LHC.

The two incoming protons see each other as compressed pancakes accompanied by an equally squeezed electromagnetic field (protons are charged, and all charged particles have an electromagnetic field).

The energy of the LHC, combined with this length contraction, boosts the strength of the protons' electromagnetic fields by a factor of 7,500.
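That factor is essentially the Lorentz factor of the protons. Assuming the LHC design beam energy of about 7 TeV per proton and a proton rest energy of 0.938 GeV (standard values supplied here, not stated in the article), a rough check gives:

\[
\gamma \;=\; \frac{E}{m_p c^2} \;\approx\; \frac{7000\ \mathrm{GeV}}{0.938\ \mathrm{GeV}} \;\approx\; 7.5\times 10^{3} .
\]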

When two protons graze each other, their squished electromagnetic fields intersect. These fields skip the classical "amplify" etiquette that applies at low energies and instead follow the rules outlined by quantum electrodynamics. Through these new laws, the two fields can merge and become the "E" in E=mc².

"If you read the equation E=mc² from right to left, you'll see that a small amount of mass produces a huge amount of energy because of the c² constant, which is the speed of light squared," says Alessandro Tricoli, a researcher at Brookhaven National Laboratory, the US headquarters for the ATLAS experiment, which receives funding from DOE's Office of Science.

"But if you look at the formula the other way around, you'll see that you need to start with a huge amount of energy to produce even a tiny amount of mass."
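A quick numerical illustration of both readings of the equation; the W boson mass of roughly 80.4 GeV/c² is a standard value supplied here, not a figure from the article.

```python
# E = m * c**2 read in both directions, in SI units.
c = 2.998e8          # speed of light, m/s
eV = 1.602e-19       # joules per electronvolt

# Right to left: a small mass releases an enormous energy.
one_gram = 1e-3                                   # kg
print(f"1 g of matter  -> {one_gram * c**2:.2e} J")          # ~9e13 J

# Left to right: making even a tiny mass needs a huge concentrated energy.
w_boson_energy = 80.4e9 * eV                      # ~80.4 GeV, a standard value
print(f"one W boson    -> {w_boson_energy:.2e} J of collision energy")
print(f"its mass       -> {w_boson_energy / c**2:.2e} kg")
```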

The LHC is one of the few places on Earth that can produce and collide energetic photons, and it is the only place where scientists have seen two energetic photons merging and transforming into massive W bosons.

Unification of forces

The generation of W bosons from high-energy photons exemplifies the discovery that won Sheldon Glashow, Abdus Salam, and Steven Weinberg the 1979 Nobel Prize in physics: at high energies, electromagnetism and the weak force are one and the same.

Electricity and magnetism often feel like separate forces. One normally doesn't worry about getting shocked while handling a refrigerator magnet. And light bulbs, even while lit up with electricity, don't stick to the refrigerator door. So why do electrical stations sport signs warning about their high magnetic fields?

"A magnet is one manifestation of electromagnetism, and electricity is another," Tricoli says. "But it's all electromagnetic waves, and we see this unification in our everyday technologies, such as cell phones that communicate through electromagnetic waves."

At extremely high energies, electromagnetism combines with yet another fundamental force: the weak force. The weak force governs nuclear reactions, including the fusion of hydrogen into helium that powers the sun and the decay of radioactive atoms.

Just as photons carry the electromagnetic force, the W and Z bosons carry the weak force. The reason photons can collide and produce W bosons in the LHC is that at the highest energies, those forces combine to form the electroweak force.

"Both photons and W bosons are force carriers, and they both carry the electroweak force," Griso says. "This phenomenon is really happening because nature is quantum mechanical."
