Introduction to Hypernetworks in AI



At AI Supremacy we pride ourselves (well, it’s just me for now) on trying to cover some of the academic news in AI, such as this introduction to hypernetworks. There are so many research papers and Ph.D. students doing incredible things in this space that it’s a very exciting time.


It’s also realistically nearly impossible to cover everything, which is why we try to write every day. For more academic insights into AI, I recommend Synced (https://syncedreview.com) if you have a more technical frame of reference.



Here at AI Supremacy, we try to share AI news, like this introduction to hypernetworks, with everyone.


Quanta Magazine first broke the story. Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a “hypernetwork” — a kind of overlord of other neural networks — that could speed up the training process. You could even make the case that hypernetworks represent a world where AI is building other AI.


Today’s neural networks are ever hungrier for data and power. Training them requires carefully tuning the values of millions or even billions of parameters that characterize these networks, representing the strengths of the connections between artificial neurons.



Why Hypernetworks in AI Are Sort of a Big Deal


Given a new, untrained deep neural network designed for some task, the hypernetwork predicts its parameters in fractions of a second and, in theory, could make training unnecessary. Because the hypernetwork learns the extremely complex patterns in the designs of deep neural networks, the work may also have deeper theoretical implications.


For now, the hypernetwork performs surprisingly well in certain settings, but there’s still room for it to grow, which is only natural given the magnitude of the problem. If they can solve it, “this will be pretty impactful across the board for machine learning,” said Petar Veličković, a staff research scientist at DeepMind in London.


If a hypernetwork lets us skip most of that training and testing, we can build new networks faster, and the hypernetwork itself can become more involved in the optimization process.



Hypernetworks in AI: Training


Currently, the best methods for training and optimizing deep neural networks are variations of a technique called stochastic gradient descent (SGD). One can, in theory, start with lots of architectures, then optimize each one and pick the best. However, this can be a slow, time-consuming process.
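To make the conventional pipeline concrete, here is a minimal sketch of training one candidate network with plain SGD in PyTorch. The architecture, synthetic data, and hyperparameters are illustrative placeholders, not anything from the work discussed here.

```python
# Minimal sketch of conventional training with stochastic gradient descent (SGD).
# Model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(512, 32)                 # dummy data standing in for a real dataset
targets = torch.randint(0, 10, (512,))

for epoch in range(10):                       # many passes over the data
    for i in range(0, len(inputs), 64):       # mini-batches of 64 examples
        x, y = inputs[i:i + 64], targets[i:i + 64]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)           # measure the error on this batch
        loss.backward()                       # compute gradients
        optimizer.step()                      # nudge every parameter downhill
```

Repeating a loop like this for every candidate architecture is what makes the search so expensive, and that is the cost a hypernetwork tries to avoid.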


In 2018, Mengye Ren, now a visiting researcher at Google Brain, along with his former University of Toronto colleague Chris Zhang and their adviser Raquel Urtasun, tried a different approach. They designed what they called a graph hypernetwork (GHN) to find the best deep neural network architecture to solve some task, given a set of candidate architectures. The name outlines their approach. “Graph” refers to the idea that the architecture of a deep neural network can be thought of as a mathematical graph — a collection of points, or nodes, connected by lines, or edges.
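As a rough illustration of that framing (and only an illustration: the node and edge features real GHNs use are richer), a small architecture could be encoded as a graph of operations like this:

```python
# Illustrative encoding of a small network architecture as a graph.
# Nodes are operations, directed edges describe how tensors flow between them.
# This is a simplified stand-in for the computation graphs real GHNs consume.
nodes = [
    {"id": 0, "op": "input"},
    {"id": 1, "op": "conv3x3", "channels": 32},
    {"id": 2, "op": "relu"},
    {"id": 3, "op": "conv3x3", "channels": 64},
    {"id": 4, "op": "global_avg_pool"},
    {"id": 5, "op": "linear", "out_features": 10},
]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # a simple chain-shaped network

# A graph neural network operating on (nodes, edges) can pass messages between
# neighboring operations and emit a parameter tensor for each node.
```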


A graph hypernetwork starts with any architecture that needs optimizing (let’s call it the candidate). It then does its best to predict the ideal parameters for the candidate. The team then sets the parameters of an actual neural network to the predicted values and tests it on a given task. Ren’s team showed that this method could be used to rank candidate architectures and select the top performer.
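Put together, the ranking workflow described above might look roughly like the sketch below; `graph_hypernetwork`, `as_graph`, and `accuracy` are hypothetical stand-ins for a trained hypernetwork, an architecture encoder, and an evaluation routine, not functions from the published code.

```python
# Rough sketch of ranking candidate architectures using predicted parameters.
# `graph_hypernetwork`, `as_graph`, and `accuracy` are hypothetical callables.
def rank_candidates(candidates, graph_hypernetwork, as_graph, test_loader, accuracy):
    scores = {}
    for name, candidate in candidates.items():
        graph = as_graph(candidate)                      # architecture -> graph
        predicted = graph_hypernetwork(graph)            # one forward pass, no training
        candidate.load_state_dict(predicted)             # install the predicted parameters
        scores[name] = accuracy(candidate, test_loader)  # evaluate on the target task
    # Rank candidates by how well they do with predicted (untrained) parameters.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```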


When Knyazev and his colleagues came upon the graph hypernetwork idea, they realized they could build upon it. In their new paper (arXiv:2110.13100), the team shows how to use graph hypernetworks not just to find the best architecture from some set of samples, but also to predict the parameters for the best network such that it performs well in an absolute sense.


Are Universities Centers of Higher Education or Higher Indoctrination in Academia?





Introduction


In the Western world, universities have been held in high regard as institutes of higher education where complex and at times highly controversial ideas could be openly critiqued and studied from a vantage point of neutrality. Yet proud traditions of open discourse are slowly being eroded by those who do not desire open inquiry and who insist that academia bow to a narrow, pseudoscientific, and flawed view of the world, without anyone being allowed to even question its validity.




Even once-proud centers of higher education have begun to succumb to this intellectual rot. The giants of education, the University of Oxford, which dates to at least 1167 (Oxford, 2018), and the University of Cambridge, have discarded their traditions of open inquiry and lie prostrate before destructive ideologies that pit men and women against each other and drive a wedge deeper into already unstable race relations (Cambridge Equality & Diversity, 2020). Across the United States, Australia, Switzerland, Canada, and the United Kingdom, universities have slowly but surely begun to bow to the intellectually immature rather than uphold freedom of speech, open inquiry, and freedom of thought. The history of open inquiry and free speech in academia has been a centuries-long battle between those who desire to think and those who desire to tell others what to think. For centuries, those who would censor academic inquiry have recycled the same methodologies to intimidate and de-platform those who would not follow the narrative.


Today, large swathes of academia have already succumbed to the relentless tide of Philistinism. Academia has become the proverbial canary in the coal mine, and what happens in universities is of great importance because they are a microcosm of wider society, where the spread of this same dangerous mindset can clearly be seen. Professors are behaving as zealots and activists rather than remaining neutral where possible and acting as facilitators of complex discussions and problem-solving skills. Professors themselves partake in violent demonstrations (Bostock, 2020) and encourage their inexperienced students not to act with reason, but instead condone and justify repulsive behaviors in their students. They have forsaken the art of teaching and instead revel in the power and influence that comes with indoctrination. Instead of providing stimulating and thought-provoking lessons, they water down education and create Safe Spaces as if their students were young children in need of mental protection. And yet that is what they are: intellectual children, unable and unwilling to grasp or deal with anything controversial; there is no logic and no reason, merely recycled ideology.


These activist professors and students follow the same methods as those in the Third Reich and the USSR. They make concerted ad hominem attacks without ever addressing the issues raised, they attempt to have professors expelled from universities for displaying views they disagree with (Hilu, 2020), they seek to ban literature that critiques their worldview, they attend speeches and attempt to drown out the speakers (Lynskey, 2018), and they justify the use of violence against speech they deem to be “offensive”. This cultish behavior, which knows no reason or intellect, has made itself the judge, jury, and executioner of an anti-intellectual Academic Inquisition. Just as in the Third Reich and the USSR, pseudointellectual and pseudoscientific disciplines that venerate victimhood and activism, and which must never be questioned, have become well established in many universities.




Another area to be considered is that of financial incentives provided only to those projects, fields of research, or ideologies that are considered “Orthodox” and the financial strangulation of “Heterodox” thought. Elsner and Lee (2008) note “that influence has left the main mechanisms of reproduction of the mainstream untouched. These are mass teaching, public advising, journal policies, and faculty recruitment. Above that, the last decade has seen something like a “counterattack” to safeguard these mainstream reproduction mechanisms. The means used for this seem to be journal (and publisher) rankings based on purely quantitative citation measures and “impact factors”. These have an obvious cumulative “economies‐of‐scale” effect which triggers a tendency towards reinforcement and collective monopolization of the dominating orientation. Department rankings and individual faculty evaluations are then based on journals rankings.” Orthodoxy in terms of following a narrative in publishing poses in itself a variety of problems. As publishing in many fields is imperative to receiving continued employment, bonuses, or project funding, it becomes necessary to create research that will not be deemed as potentially Heterodox by reviewers, thereby delaying or disqualifying publication. A gatekeeper effect thus develops (Schweitzer & Saks, 2009) which establishes a model of persuasion, and gradually a sense of Orthodoxy is formed, due to the dominating orientation in Higher Education. 


From Ideological Orthodoxy thus arises a concept of a monopoly on truth, even when such “truths” are purely philosophical and/or ideological. Campbell (2019) states that in such an academic climate, academics are forced to “go along with” the Orthodoxy or keep their differences to themselves, going on to show that this is enforced by the pressures of building an academic career, because it has a direct bearing on “writing and research, teaching and grading, hiring and firing, and public commentary”. The bounds of Orthodoxy then become progressively narrower and more constricting. Newly graduated scholars have no choice but to follow the popular ideology, as failure to do so would impede or disqualify them from any progress or position in academia, where the support of superiors and peers is central to gaining recognition within the meritocracy. Meritocracy itself then becomes more about who one knows and how willing one is to follow the trend than about actual merit. There is historical precedent.


Returning to the Third Reich, the idea of defunding to de-platform opposing views was used extensively. An example is the field of prehistoric archaeology, which before the rise of National Socialism in Germany did not have extensive funding. The NAZIs saw the usefulness of this field for creating a national Zeitgeist and building pride and nostalgia in Germany’s past; hence funding was increased greatly, and Arnold (1990) describes the three academic factions that resulted, “The Party-Liners”, “the Mitläufer”, and “the Opposition”, all of which can be seen in the modern context in a true recreation of Reich-like conditions. The Party-Liners were academics willing and ready to espouse “politically correct” research. Scholars such as Herman Wille, Wilhelm Teudt, and Oswald Menghini helped bolster the Zeitgeist of the NAZI Party with below-par, yet for the period “Orthodox”, scholarship. The Mitläufer, as the name suggests, are those who “walk with” or blindly follow so as to receive funding and keep their positions; in Germany, this meant passively teaching the Orthodox doctrines created by the National Socialists, an imperative act of sanctioning the ideology itself, all for funding. The final group, “The Opposition”, were those who did not, or refused to, fit the first two categories. Essentially, these academics were given a choice: follow the politically correct Orthodox views and research, or lose funding and positions. Hugo Obermaier stated that he turned down a position as Chair at the University of Berlin because “National Socialists had already taken possession of the field”. Another example is Franz Weidenreich, who was forced out as Chair at the University of Frankfurt. This financial undermining of heterodox academics was the foundation of the next phase of censorship.


Returning once again to the present, consider how the next phase of censorship takes place. Once dissenting and heterodox views have been sufficiently silenced, and once policy takes sides with a particular ideological view, the work of academics can be attacked. A modern phenomenon, not unlike the past labels of “harmful and undesirable”, is for the morality of a view to be called into question and used as justification for the attack. A favorite label in modern times is “hate speech”, applied to any view that challenges the Orthodox narrative, even when such arguments are based on science or logic. This label of hate speech is then used to argue that the views or thoughts of said academics cause social or moral decay, or that they are emotionally damaging. Activists (often including academics) then demand the removal of funding for the targeted individual by boycotting their lectures or encouraging others to do so, as was the case with Professor William Jacobson (Allen, 2020). Another method is the making of false allegations of ethical, academic, or professional misconduct, such as those leveled against Professor Dorian Abbot (Klinghoffer, 2020) and Professor Janice Fiamengo (Robertson, 2016). Another example was the strong Zeitgeist in the attack on Professor Jordan Peterson for his refusal to use certain pronouns (Murphy, 2016), and then again when the University of Cambridge rescinded its offer of a visiting fellowship due to pressure from activist students (Marsh, 2019). Then there are the limiting factors that come into play when free speech is exercised, as was the case with Professor Gad Saad of Concordia University, whose views have prevented him from climbing the academic ladder and who must be escorted by security for his own safety on campus (Shah, 2019). With the great potential for backlash on the professional, academic, and public fronts for even tenured professors, it is little wonder that career academics without tenure and newly graduated scholars would be very wary of departing from Orthodoxies.


Comparing again with the Third Reich, de-platforming mobs were also used against academics who remained obstinate, as was the case with Gero von Merhart, who was publicly maligned and defamed. Jacob-Friesen bravely spoke out against what he viewed as perversions of research, namely the dogma of the superiority of race and culture; he was promptly sent a letter warning him that dissent would not be tolerated. As time progressed, blacklists were created, and works by certain authors and scholars were either banned (and often burned) or severely restricted, with only those deemed loyal to the party being permitted to “study the works of the enemy”. Book burning had strong support among professors. Professor of German Philology Hans Naumann and Professor of Art History Eugen Lüthgen actively encouraged students to burn books that could “mislead them”. At the Technical University, Professor of German Literature Friedrich Neumann and the Director of the Institute of Literature and Theatre, Gerhard Fricke, led the burning of books and called it a symbol of the purification that comes from burning trash. A favorite book to burn was “A History of Germany” by the German author August von Kotzebue, who had been murdered by a student activist. During all of this, not a single university protested the censorship, and all gave their support. Joseph Goebbels had already declared that such writers and academics, who wished to critique the nation, should be put against a wall and shot (Lewy, 2016).



Conclusion

Extremism and radicalism only flourish like mushrooms in an intellectually dark and unventilated space; the light and ventilation of open critique, free speech, and open inquiry are the only way to prevent such parasitic growths. Professors and students alike must be open to discussing and understanding potentially volatile issues in a civilized, intellectual, and polite manner, thereby reaching the core of issues and building intellectual capability. Professors must make earnest efforts to remain as neutral as possible on controversial issues and, when giving their own views, must clearly state that it is only their view. It is obvious that a balance needs to be regained in academia, along with a return to meritocracy. The time in which to stop history from repeating itself is short, and the time to save academia from its own indulgence is shorter still. Now is the time to choose between Higher Education and Higher Indoctrination in academia.


References

  • Andreev, A. & Tsygankov, D. (2010). Imperial Moscow University: 1755–1917: Encyclopedic Dictionary. Moscow: Russian Political Encyclopedia (ROSSPEN), pp. 226–227. ISBN 978-5-8243-1429-8.
  • Bostock, B. (2020). Video shows the college professor who pretended to be Black attacking the NYPD, accusing it of brutalizing “our” people at BLM protests. [online] Insider. Available at: https://www.insider.com/video-professor-pretend-black-attacks-ny-councilracism-police-blm-2020-9 [Accessed 6 Dec. 2020].
  • Carmon, A. (1976). The Impact of the Nazi Racial Decrees on the University of Heidelberg. Yad Vashem Studies, 11, pp. 131–163.
  • Galileo Monument (2010). Multimedia Catalogue - Glossary - Monumental tomb of Galileo. [online] Available at: http://webarchive.loc.gov/all/20100805135633/http://brunelleschi.imss.fi.it/museum/esim.asp?c=100359 [Accessed 4 Dec. 2020].


Artificial Intelligence for Energy Efficiency and Renewable Energy – 6 Current Applications


The U.S. Energy Information Administration (EIA) defines renewable energy as an energy source that naturally regenerates, such as solar or wind. In contrast, fossil fuels are considered finite. The EIA reports that in 2016, 10 percent of all energy consumed in the U.S. was derived from renewable energy sources. This is equivalent to roughly “10.2 quadrillion British thermal units (Btu) — 1 quadrillion is the number 1 followed by 15 zeros”.


Despite the increasing use of renewables which notably became the leading global source of electricity in 2015, there are still persistent barriers to wider implementation related to policy and technology. Researchers and companies are exploring how artificial intelligence could assist in improving the accessibility and efficiency of renewable energy technology.


In this article, we present examples of renewable energy technologies which incorporate AI. We cover three major categories of renewable technologies that should be of interest to business leaders in the green energy space:


  • Energy Forecasting  – Industry data is used to train AI algorithms to make accurate forecasts, helping to inform power supply and demand
  • Energy Efficiency – AI is used to track and optimize how energy is used
  • Energy Accessibility – AI is used to model utility cost savings and provide recommendations for smart home investments

For each application, we provide a company overview, an explanation of how the platform functions, and outcome data and/or results where available. Each example is organized under a sub-heading which serves as a quick reference when navigating through the article.



AI for Energy Forecasting


Xcel


A consistent challenge with renewable energy sources such as wind and solar power is their unreliability. Weather-dependent power sources will often fluctuate in their strength.


In Colorado, energy provider Xcel is implementing AI in an attempt to address these challenges. Through the National Center for Atmospheric Research’s new AI-based data mining method, Xcel was able to reportedly access weather reports with a higher level of accuracy and detail.


This meant that greater precautions could be taken to harness and preserve the energy that was generated. In order to provide these detailed weather reports, the AI system mines a combination of data from local satellite reports, weather stations as well as wind farms in the surrounding area.  The algorithms driving the system are trained to identify patterns within these data sets and make predictions based on those data points.  
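The article gives no technical detail about NCAR's system, but the general pattern described here (learn a mapping from recent weather observations to near-term power output) can be sketched as below, with entirely synthetic features and data rather than anything from Xcel or NCAR.

```python
# Illustrative sketch of learning to predict wind-farm output from weather features.
# Features and data are synthetic; this is not NCAR's or Xcel's actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Pretend columns: wind speed, wind direction, pressure, temperature
observations = rng.normal(size=(1000, 4))
power_mw = 3.0 * observations[:, 0] + rng.normal(scale=0.5, size=1000)  # synthetic target

model = GradientBoostingRegressor().fit(observations[:800], power_mw[:800])
print("held-out R^2:", model.score(observations[800:], power_mw[800:]))
```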


Xcel reports that wind power has doubled in Colorado since 2009. Earlier this year, Xcel reported plans to expand its wind farms by 50 percent by the year 2021.


Nnergix


Weather can often be unpredictable, destabilizing the power supply generated from weather-dependent energy sources such as solar and wind. This puts pressure on the renewable energy sector to efficiently balance supply and demand.


Historically, weather forecasts have helped energy suppliers make predictions regarding their power supply.  Today, companies such as Nnergix are incorporating artificial intelligence to improve the accuracy of renewable energy forecasting.


Nnergix is a data mining and web-based energy forecasting platform which pools data from the energy industry. The company reportedly combines satellite data from weather forecasts and machine learning algorithms trained on industry data to make more accurate forecasts.


High-resolution weather forecasts appear to be generated from satellite images. Large-scale and smaller-scale weather models are reportedly generated based on these images. The machine learning algorithms analyze these data and can then predict the state of the atmosphere for a particular area.


For example, the company offers three main services, including a solar energy solution, and claims that forecasts can range from 6 hours to ten days in advance, with data updates occurring eight times a day. Reports can be delivered in multiple formats.




To delve deeper into AI applications of weather forecasting, readers may find our article titled AI for Weather Forecasting – In Retail, Agriculture, Disaster Prediction, and More to be a useful resource.


AI for Energy Efficiency

 

Verdigris Technologies


Founded in 2011, California-based Verdigris Technologies offers a cloud-based software platform that claims to leverage artificial intelligence to help clients optimize energy consumption. Designed for large commercial buildings and managers of enterprise facilities, the process begins with the installation of IoT hardware.


Smart sensors are directly attached to the client’s electrical circuits to track energy consumption. The data captured by the sensors is sent to the cloud “securely over Wi-Fi or Verizon 4G/LTE” and is presented to the client on a dashboard that is accessible online 24/7.


The involvement of Verizon runs a bit deeper than a wireless connection. In October 2016, corporate venture capital firm Verizon Ventures made an undisclosed investment in a Series A funding round amounting to $6.7 million. To date, Verdigris has raised over $16.5 million in total funding.


Every appliance tends to have a unique electrical footprint. As a result, the algorithms have been designed to identify each unique energy source while still providing a comprehensive analysis of the data captured by the smart sensor hardware. The data analysis is communicated to Verdigris’ cloud-based servers.
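The “electrical footprint” idea can be illustrated with a toy classifier that maps a few hand-picked features of a circuit's draw to an appliance label. The feature values and labels below are invented; a production system such as Verdigris's presumably works on much richer, higher-frequency waveform data.

```python
# Toy illustration of identifying appliances from an "electrical footprint".
# Features (steady-state watts, power factor, startup spike ratio) are invented.
from sklearn.neighbors import KNeighborsClassifier

training_footprints = [
    [60.0, 0.99, 1.0],    # LED lighting: low draw, clean power factor, no spike
    [1500.0, 0.98, 1.2],  # kettle: large, purely resistive load
    [150.0, 0.60, 8.0],   # refrigerator: inductive load with a big startup spike
]
labels = ["lighting", "kettle", "refrigerator"]

classifier = KNeighborsClassifier(n_neighbors=1).fit(training_footprints, labels)
print(classifier.predict([[155.0, 0.62, 7.5]]))  # -> ['refrigerator']
```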


In one case study, Verdigris reports that it worked with W Hotel San Francisco to identify energy inefficiencies in the hotel’s commercial kitchen. Within a three-month period, the company reportedly identified inefficiencies that were costing the hotel more than $13,000 in preventable annual losses.


Google DeepMind


Founded in London in 2010 and acquired by Google in 2014, AI company DeepMind Technologies Ltd. reportedly reduced the amount of energy required to cool Google’s data centers by 40 percent.

DeepMind reported these results in July 2016; however, the company claims that it first began applying machine learning to improve energy usage two years prior. Specifically, a set of data center operating scenarios and parameters were used to train a system of neural networks. The neural network “learned” how the data center functioned and began identifying opportunities for optimization.


Google claims that data was pulled from thousands of sensors located in the data centers. Information collected included temperature and power consumption. Power Usage Effectiveness (PUE) is defined as the ratio of “total building energy usage to IT usage” and was used to train the neural networks. The PUE model helps ensure efficiency, so that when the neural network system provides recommendations, they do not exceed operating constraints.
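As a concrete illustration of the metric itself: PUE is simply total facility energy divided by IT equipment energy, so a proposed recommendation can be sanity-checked against a PUE limit before it is applied. All numbers below are invented for the example.

```python
# Worked example of Power Usage Effectiveness (PUE) with invented numbers.
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    return total_facility_kwh / it_equipment_kwh

baseline = pue(total_facility_kwh=1.35e6, it_equipment_kwh=1.0e6)       # 1.35
after_tuning = pue(total_facility_kwh=1.22e6, it_equipment_kwh=1.0e6)   # 1.22

MAX_SAFE_PUE = 1.5  # assumed operating constraint for this example
assert after_tuning < baseline and after_tuning <= MAX_SAFE_PUE
print(f"PUE improved from {baseline:.2f} to {after_tuning:.2f}")
```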


A graph in DeepMind’s report (image credit: DeepMind) depicts an average day in which the model was tested using live data, indicating when the machine learning recommendations were switched on and off.




The Google data centers house the servers that power Google’s top applications, including Gmail and YouTube, each estimated to have over a billion users, representing roughly one-third of all internet users. Google’s capital expenditures have been reported to mainly support data center operations and improvements. In 2016, total expenditures reached an estimated $10.9 billion, up from $9.9 billion in 2015.


(DeepMind’s own Dr. Nando de Freitas joined us on our AI in industry podcast in 2016 to explain “deep learning” in simple terms – and the episode is one of our most popular of all time. Listen to the episode on Soundcloud or iTunes.)


AI for Energy Accessibility


PowerScout


In an effort to improve consumer education and access to renewable energy technologies, PowerScout reportedly uses AI to model potential savings on utility costs using industry data.


The company reportedly leverages data analytics to identify “smart home improvement projects” based on the unique features and energy usage in a client’s home. PowerScout’s algorithm appears to match clients to potential hardware installation providers in an online marketplace format to ensure competitive rates. 


Essentially, the AI acts as a marketplace advisor, providing recommendations to help clients make informed decisions regarding renewable energy technology purchases for their homes. We imagine that this use of AI is similar to the recommendation capabilities seen in other marketplace businesses (which we’ve covered in greater depth in our recommendation engine use-case article). The development team claims the platform has collectively overseen the installation of solar capacity roughly equivalent to powering 250,000 homes as of March 2017.


PowerScout lists Google and the US Department of Energy among its partners. In fact, the company is the recipient of two grants from the US Department of Energy amounting to a total of $2.5 million.


Verv


Verv is an AI-powered home assistant created by London-based Green Running Ltd. The system reportedly uses its technology to assist clients with energy management in their homes.


Verv supplies energy data on home appliances and itemizes energy costs on a consistent basis. Users are reportedly able to see a record of how each appliance in their home uses energy and monitor and regulate their energy expenses before bills are due.


When a household appliance is turned on, the algorithms driving the AI assistant recognize patterns and can automate a running tally of the energy costs that the item is generating. Verv also reportedly has several safety features that provide notifications when devices are left on for prolonged periods of time as well as tips to reduce a household’s carbon footprint. The app is available for tablets, laptops, and Smartphones.
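A stripped-down version of that running tally might look like the following sketch. The appliances, wattages, usage figures, and tariff are invented, and Verv's actual system infers on/off events from the live electrical signal rather than from manual input.

```python
# Toy running tally of per-appliance energy cost. All figures are invented.
TARIFF_PER_KWH = 0.30  # assumed price per kWh

appliance_watts = {"kettle": 1500, "washing_machine": 500, "tv": 100}
hours_on_today = {"kettle": 0.25, "washing_machine": 1.5, "tv": 4.0}

running_costs = {
    name: (watts / 1000.0) * hours_on_today[name] * TARIFF_PER_KWH
    for name, watts in appliance_watts.items()
}
for name, cost in running_costs.items():
    print(f"{name}: {cost:.2f} so far today")
```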


Concluding Thoughts and Future Outlook


The renewable energy sector is a growing economic force and an effective strategy towards improving environmental sustainability. Artificial intelligence is being integrated across major sectors of this industry, increasing the capacity of data analytics.


The variable nature of weather presents inherent challenges which may cause suppliers to rely on traditional energy sources to meet consumer demands. Therefore, AI-driven energy forecasting platforms may hold promise for providing energy suppliers with the data required to respond to fluctuations that may negatively affect operations and to plan accordingly.


2015 was a banner year for renewables, as evidenced by the commitments by the G7 and G20 to accelerate implementation and improve overall energy efficiency. However, overcoming the barriers to widespread and accelerated implementation will require ongoing evidence of benefits, particularly in the economic and political arenas.


Platforms that can accurately identify cost savings and energy efficiency for consumers and companies will prove valuable in the near term.


Readers interested in understanding how AI is being used in the traditional energy sector may find Artificial Intelligence in Oil and Gas – Comparing the Applications of 5 Oil Giants to be a useful read.




AI Plays a Major Role in Boosting Renewable Energy



INTRODUCTION

In the coming years, the world will watch as AI, machine learning, and data science transform the economy and our day-to-day lives. Perhaps no AI-enabled change has greater implications for mankind, however, than the reshaping of the energy sector. The energy sector is generally regarded as conservative in mindset and therefore slow to adopt digital technology. After all, much of the technology that powers modern life (coal, oil, the electrical grid) has remained largely unchanged since the late 19th century. Yet a 2017 McKinsey report classified resources and utilities as being in the middle of the pack in terms of digitization, above retail, education, and health care but behind financial services, automotive and manufacturing, and, of course, the technology sector.


However, the last few years have seen significant technological changes in the energy economy:


  • Oil and natural gas prices are low because of pioneering technology that allows companies to affordably access resources that were previously considered uneconomical.
  • Electric utilities are using machine learning to better understand their customers and deploy their resources more efficiently, cutting costs for the utility and consumers alike.
  • Meanwhile, the prospect of a renewable revolution becomes more realistic by the day largely because of major advances in AI that help generators maximize the impact of the sunshine and wind they are harnessing.


The drive to make utilities more efficient through AI, machine learning, and data science has resulted in major benefits for every actor in the energy sector, including generators, distributors, the environment, taxpayers, and consumers. There is still much to do, however; resource and utility companies that hope to remain competitive in the coming years should be aggressively pursuing the next technological frontier. This white paper will cover some of the highest-value use cases in the utilities and energy industries as well as suggested paths to scale up AI competencies within these organizations.


AI is most certainly going to play a major role in boosting renewable energy. While renewable sources, notably solar and wind power, are on the rise, they are still not capable of being the dominant energy sources due to their intermittent nature. While there have been promising advances in battery storage technology that allows utilities to store power generated from intermittent sources and dispatch it when needed, the devices remain too expensive for widespread adoption. That explains why, despite investing heavily in renewable projects, China also continues to build new coal plants at breakneck speed. And while renewables have surpassed coal in the United States, they still lag far behind natural gas, which is both cheap and dispatchable.


Data is the key to helping utilities make the most of available renewable sources. For example, AI-powered predictive analytics based on historical data help utilities forecast the weather with a precision that would have been unfathomable until recently.

 

Being able to predict how much sun or wind will be available two hours from now allows utilities to determine how much generated energy they should store. Predictive analytics will also help utilities optimize the search for wind or solar-generating properties so that they can know exactly how much power a given parcel of land is expected to produce. Money is pouring into ventures aimed at making renewable power more cost-effective through digital technology and AI. In fact, conventional energy giants, such as ExxonMobil, Southern Company, and Tokyo Gas are investing in renewables in anticipation of a greener energy economy in the coming decades.
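A caricature of the storage decision described here, under the assumption that the utility simply compares forecast generation with forecast demand for the next interval; the numbers and thresholds are purely illustrative, not any utility's actual dispatch logic.

```python
# Purely illustrative charge/discharge rule driven by a short-term forecast.
def storage_action(forecast_generation_mw: float, forecast_demand_mw: float) -> str:
    surplus = forecast_generation_mw - forecast_demand_mw
    if surplus > 0:
        return f"charge storage with {surplus:.1f} MW of surplus"
    return f"discharge {-surplus:.1f} MW from storage (or dispatch backup generation)"

# Example: the two-hours-ahead forecast says wind will exceed demand.
print(storage_action(forecast_generation_mw=420.0, forecast_demand_mw=395.0))
```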


Texas, the longtime heart of America’s oil and gas economy, is also the source of much of the country’s emerging wind sector; ExxonMobil is powering its oil operations in the Permian Basin with wind energy.


Banking Giant HSBC Partners with Metaverse Firm The Sandbox



HSBC is the latest corporate giant to enter the metaverse through a partnership with The Sandbox.


According to a blog post published by The Sandbox on Wednesday, the British bank has acquired a plot of land in the metaverse startup’s virtual world — space that will be developed to entertain sports, e-sports, gaming, and finance professionals.

Details about the exact nature of the initiative were scant, but HSBC’s Suresh Balaji, chief marketing officer for the Asia-Pacific region, said in the statement that the metaverse “is how people will experience web3, the next generation of the internet.”

“At HSBC, we see great potential to create new experiences through emerging platforms, opening up a world of opportunity for our current and future customers and for the communities we serve,” he continued.

An image attached to the blog post depicts a pixelated plot of land, complete with an HSBC-branded rugby stadium. 


The Sandbox is a subsidiary of Animoca Brands, the Hong Kong gaming company. The firm raised $93 million in a round led by SoftBank Vision Fund 2 in November last year.

Alongside HSBC, companies such as Gucci, Warner Music Group, and Ubisoft have flocked to The Sandbox’s metaverse.

Its co-founder and COO Sebastien Borget said the interest of companies like HSBC in the metaverse signals “the beginning of a broader adoption of web3 and the metaverse by institutions driving brand experiences and engagement within this new ecosystem.”


A Case Study in Intelligence Failure (The 7/7 Attacks)




The UK intelligence community failed to foresee the terrorist attack on the London transport system that took place on 7th July 2005. The intelligence failure can be seen at both a strategic and a tactical level. The strategic failure lay in the underestimation of the threat posed by domestic terrorism. The tactical failure lay in the failure of the Security Service to discover the plot of the 7/7 bombers before they had a chance to carry it out. The JTAC had made the assessment, based on the advice of MI5, that no terrorist group had the intent and capability to carry out an attack against the UK. The threat level was decreased one month before the attack as a result of this strategic assessment.


MI5 failed to uncover the plot despite having an opportunity to do so. Two members of the 7/7 cell had been linked to members of another terrorist cell discovered to be planning attacks in the UK. These two men were Mohammed Sidique Khan and Shehzad Tanweer. The two men were classed as ‘desirable’ rather than ‘essential’ targets, due to MI5’s belief, gathered from intercepted conversations, that they were interested in financial fraud rather than attack planning. Once operation CREVICE came to its conclusion, MI5’s system of prioritization meant that these targets, being only ‘desirable’, weren’t investigated further with any vigor. If a target that is not deemed ‘essential’ goes on to commit a terror attack, it is clear that an intelligence failure has taken place. Several pieces of evidence were overlooked by MI5 that, if pieced together, would have indicated a greater interest in carrying out an attack than was concluded. If these men had been more thoroughly investigated, the attack might have been averted.


Why did the intelligence operation fail?

Ultimately, the intelligence operation failed because the extent of the threat from domestic terrorism wasn’t fully appreciated, which led to failures at both the strategic and tactical levels. This was due to the UK intelligence community’s faith in the strategic concept of the ‘Covenant of Security’. The presence of Islamist extremism had been tolerated in the UK - compared to the rest of Europe - under the understanding that no attacks would be carried out on British soil. This concept has been described as pervading every aspect of the UK’s intelligence apparatus. The British government had put their faith in extremists such as Abu Hamza, Abu Qatada, and Omar Bakri Mohammed. However, following the increased stringency of post-9/11 anti-terror legislation, this covenant began to crumble, as those very same men indicated. The view that there was a limited threat from Islamist extremism domestically was not updated. This may have been the reason why, when French intelligence assessed that an attack was being planned from within the UK Pakistani community, the intelligence services weren’t unduly concerned. There was a belief, according to the Director-General of the Security Service, that terrorist capability had been dented by the successful conclusions of operations CREVICE and RHYME.


This demonstrates the intelligence failure at the strategic level - the view that the threat from domestic terrorism was less extreme than it was in reality. This view had a knock-on effect: MI5 wasn’t equipped with sufficient resources to fully cover the threat. In the words of Jonathan Evans, MI5 was only capable of “hitting the crocodiles closest to the boat”. There was enough available evidence to justify further investigation of Mohammed Sidique Khan and Shehzad Tanweer, but a lack of resources meant that MI5 had to ruthlessly prioritize those showing a clear threat to life. MI5 didn’t have the resources to discover all possible threats and therefore wasn’t aware of their true extent. They were unable to challenge the accepted view, and therefore the assessment was made that no groups were actively planning to carry out attacks on the UK.


MI5 also made mistakes at a tactical level in the course of this operation. They should have been able to bring together several pieces of available evidence about MSK and ST that would have constructed profiles suggesting higher levels of significance. Several of those pieces of evidence were missed due to differing spellings of the name ‘Sidique’. It was missed that MSK and ST likely attended the ‘farewell meal’ of one of the CREVICE plotters, indicating a high level of trust and closeness to several attack planners. These mistakes were largely due to a lack of available man-hours and specialist personnel, especially translators and transcribers.


If an effort was made to build a profile of the two men, then they would at least have been moving towards ‘essential’ status. Taking the example of MSK, there was present: evidence of travel to Pakistan where it was believed he engaged in terrorist activity; a link to an address at an extremist bookshop; repeated connections to the CREVICE plotters; a link to another, separate investigation; and an expression of admiration for the Madrid bombings. All of these pieces of evidence were available to MI5 and should have been indicative of someone who was in danger of carrying out an extremist attack on the UK.


The assessment was ultimately made, based on the available evidence, that the two men were not involved in attack planning. Directed surveillance on MSK and ST, justified by their profiles, would probably have led to the discovery of the bomb factory at 18 Alexandra Grove and thus evidence that they were in the stages of planning an attack. Only through investigation of this type would attack planning have been revealed, as the plotters took efforts to ensure it was kept a secret.


MI5 wasn’t able to justify resource-expenditure on the intensive investigation without evidence of attack planning, and they weren’t able to gather evidence of attack planning without such intensive investigation. This resource-mandated catch-22 is part of the reason why there was a failure at a strategic level in underestimation of the number of groups intent on carrying out attacks, and why there was a failure at a tactical level to discover the plot.


What challenges confronted those conducting the operation?

There were three main challenges facing MI5. Firstly, the most fundamental challenge was a lack of resources. There simply were not enough personnel, equipment, or funding available to properly follow up on all the leads they came across in the course of their investigations. This resource squeeze meant that they had to be quite ruthless in deciding who was a priority and who wasn’t, which inevitably led to some falling through the cracks.


Secondly, a lack of translators and transcribers meant a failure to pick up on elements of overheard conversations that would have given the security services further reason to investigate MSK. A conversation captured by a listening device would, if properly transcribed and translated, have revealed that MSK was present at the farewell meal for Omar Khyam and had professed an admiration for the success of the Madrid bombings – two things that would surely have sent him further up MI5’s priorities. However, because there weren’t enough staff familiar with the mix of languages the men spoke, these pieces of evidence were missed.


Thirdly, domestic terrorist plots take place in a fast-paced environment where there are many changes in the intentions and capabilities of the persons involved. There are certainly too many persons in the UK holding extremist views for all of them to be fully investigated, and the journey from extremist to attack planner can take place over as little as a few weeks. This puts immense time pressure on the security services when they do discover attack plans, as there is often a very small window in which to act. These three factors made uncovering the plot very difficult for MI5.


At what point was the failure of the operation inevitable?

The failure of MI5 to discover the plot became inevitable once it was decided that MSK and ST weren’t enough of a priority to merit further investigation. A deeper look into their activities would have affirmed their extremist links and potentially have led to the discovery of the bomb factory and evidence of the plot. With operation CREVICE wrapped up and the focus of MI5’s resources directed toward operation RHYME and then other investigations the intentions of the 7/7 plotters became a mystery only discoverable by accident to those outside of the intelligence community.


Was the failure of the underlying intelligence concept inevitable?

It has been shown above that the failure of this particular operation wasn’t always inevitable; the professionalism and expertise of MI5 could have generated success in this instance, much as they did in operations RHYME and CREVICE. However, there were aspects of the UK intelligence community that meant a failure of this type was eventually inevitable. These aspects were based upon undue political influence on assessments of the threat level. The strategic concept of the covenant of security led to downward pressure on assessments of the overall threat from domestic terrorism. These effects were recognized by the ISC, which stated in its report that the development of the home-grown threat was not fully understood or applied to strategic thinking. This inevitably led to a situation in which MI5 was under-resourced, under-staffed, and unable to measure the true extent of the domestic threat. Therefore, an intelligence failure of this type was inevitable at some point.


References

  1. Andrew, Christopher (2010). The Defence of the Realm: The Authorized History of MI5. London, Allen Lane (2nd ed).
  2. Black, Crispin (2005). 7/7 The London Bombs: What went wrong? London, Gibson Square Books.
  3. Curtis, Mark (2012). Secret Affairs: Britain’s Collusion with Radical Islam. London: Serpent’s Tail.
  4. Intelligence and Security Committee (2006). Report into the London Terrorist Attacks on 7th July 2005. London, HMSO.



An Introduction to Epistemic Justification




We often believe what we are told by our parents, friends, doctors, and news reporters. We often believe what we see, taste, and smell. We hold beliefs about the past, the present, and the future. Do we have a right to hold any of these beliefs? Are any supported by evidence? Should we continue to hold them, or should we discard some? These questions are evaluative. They ask whether our beliefs meet a standard that renders them fitting, right, or reasonable for us to hold. One prominent standard is epistemic justification.


Very generally, justification is the right standing of an action, person, or attitude with respect to some standard of evaluation. For example, a person’s actions might be justified under the law, or a person might be justified before God.


Epistemic justification (from episteme, the Greek word for knowledge) is the right standing of a person’s beliefs with respect to knowledge, though there is some disagreement about what that means precisely. Some argue that right standing refers to whether the beliefs are more likely to be true. Others argue that it refers to whether they are more likely to constitute knowledge. Still others argue that it refers to whether those beliefs were formed or are held in a responsible or virtuous manner.


Because of its evaluative role, justification is often used synonymously with rationality. There are, however, many types of rationality, some of which are not about a belief’s epistemic status and some of which are not about beliefs at all. So, while it is intuitive to say a justified belief is a rational belief, it is also intuitive to say that a person is rational for holding a justified belief. This article focuses on theories of epistemic justification and sets aside their relationship to rationality.


In addition to being an evaluative concept, many philosophers hold that justification is normative. Having justified beliefs is better, in some sense, than having unjustified beliefs, and determining whether a belief is justified tells us whether we should, should not, or may believe a proposition. But this normative role is controversial, and some philosophers have rejected it for a more naturalistic, or science-based, role. Naturalistic theories focus less on belief-forming decisions—decisions from a subject’s own perspective—and more on describing, from an objective point of view, the relationship between belief-forming mechanisms and reality.


Regardless of whether justification refers to the right belief or responsible belief, or whether it plays a normative or naturalistic role, it is still predominantly regarded as essential for knowledge. This article introduces some of the questions that motivate theories of epistemic justification, explains the goals that a successful theory must accomplish, and surveys the most widely discussed versions of these theories.


Explaining Why Justification is Valuable

A third central aim of theories of justification is to explain why justification is epistemically valuable. Some epistemologists argue that justification is crucial for avoiding error and increasing our store of knowledge. Others argue that knowledge is more complicated than attaining true beliefs in the right way and that part of the value of knowledge is that it makes the knower better off. These philosophers are less interested in the truth-goal in its unqualified sense; they are more interested in intellectual virtues that position a person to be a proficient knower, virtues such as intellectual courage and honesty, openness to new evidence, creativity, and humility. Though justification increases the likelihood of knowledge under some circumstances, we may rarely be in those circumstances or may be unable to recognize when we are; nevertheless, these philosophers suggest, there is a fitting way of believing regardless of whether we are in those circumstances.


A minority of epistemologists reject any connection between justification and knowledge or virtue. Instead, they focus either on whether a belief fits into an objective theory about the world or whether a belief is useful for attaining our many diverse cognitive goals. An example of the former involves focusing solely on the causal relationship between a person’s beliefs and the world; if knowledge is produced directly by the world, the concept of justification drops out (for example, Alvin Goldman, 1967). Other philosophers, whom we might call relativists and pragmatists, argue that epistemic value is best explained in terms of what most concerns us in practice.


Debates surrounding these three primary aims inspire many others. There are questions about the sources of justification: Is all evidence experiential, or is some non-experiential? Are memory and testimony reliable sources of evidence? And there are additional questions about how justification is established and overturned: How strongly does a reason have to be before a belief is justified? What sort of contrary, or defeating, reasons can overturn a belief’s justification? In what follows, we look at the strengths and weaknesses of prominent theories of justification in light of the three aims just outlined, leaving these secondary questions to more detailed studies.


Justification and Knowledge

The type of knowledge primarily at issue in discussions of justification is the knowledge that a proposition is true or propositional knowledge. Propositional knowledge stands in contrast with knowledge of how to do something or practical knowledge. (For more on this distinction, see Knowledge.) Traditionally, three conditions must be met in order for a person to know a proposition—say, “The cat is on the mat.”


First, the proposition must be true; there must actually be a state of affairs expressed by the proposition in order for the proposition to be known. Second, that person must believe the proposition, that is, she must mentally assent to its truth. And third, her belief that the proposition is true must be justified for her. Knowledge, according to this traditional account, is justified true belief (JTB). And though philosophers still largely accept that justification is necessary for knowledge, it turns out to be difficult to explain precisely how justification contributes to knowing.


Historically, philosophers regarded the relationship between justification and knowledge as strong. In Plato’s Meno, Socrates suggests that justification “tethers” true belief “with chains of reasons why” (97A-98A, trans. Holbo and Waring, 2002). This idea of tethering came to mean that justification—when one is genuinely justified—guarantees or significantly increases the likelihood that a belief is true, and, therefore, we can tell directly when we know a proposition. But a series of articles in the 1960s and 1970s demonstrated that this strong view is mistaken; justification, even for true beliefs, can be a matter of luck. For example, imagine the following three things are true: (1) it is three o’clock, (2) the normally reliable clock on the wall reads three o’clock, and (3) you believe it is three o’clock because the clock on the wall says so. But if the clock is broken, even though you are justified in believing it is three o’clock, you are not justified in a way that constitutes knowledge. You got lucky; you looked at the clock at precisely the time it corresponded with reality, but its correspondence was not due to the clock’s reliability. Therefore, your justified true belief seems not to be an instance of knowledge. This sort of example is characteristic of what I call the Gettier Era (§6). During the Gettier Era, philosophers were pressed to revise or reject the traditional relationship.


In response, some have maintained that the relationship between justification and knowledge is strong, but they modify the concept of justification in an attempt to avoid lucky true beliefs. Others argue that the relationship is weaker than traditionally supposed—something is needed to increase the likelihood that a belief is knowledge, and justification is part of that, but justification is primarily about responsible belief. Still others argue that whether we can tell we are justified is irrelevant; justification is a truth-conducive relationship between our beliefs and the world, and we need not be able to tell, at least not directly, whether we are justified. The Gettier Era (§6) precipitated a number of changes in the conversation about justification’s relationship to knowledge, and these remain important to contemporary discussions of justification. But before we consider these developments, we address the DIJ.


The Value of Justification

Each of the theories of justification reviewed in this article presumes something about the value of justification, that is, about why justification is good or desirable. Traditionally, as in the case of the Theaetetus noted above, justification is supposed to position us to understand reality, that is, to help us obtain true beliefs for the right reasons. Knowledge, we suppose, is valuable, and justification helps us attain it. However, skeptical arguments, the influence of external factors on our cognition, and the influence of various attitudes on the way we conduct our epistemic behavior suggest that attaining true beliefs for the right reasons is a forbidding goal, and it may not be one that we can access internally. Therefore, there is some disagreement as to whether justification should be understood as aimed at truth or at some other intellectual goal or set of goals.



The Truth Goal

All the theories we have considered presume that justification is a necessary condition for knowledge, though there is much disagreement about what precisely justification contributes to knowledge. Some argue that justification is fundamentally aimed at truth, that is, it increases the likelihood that a belief is true. Laurence BonJour writes, “If epistemic justification were not conducive to truth in this way…then epistemic justification would be irrelevant to our main cognitive goal and of dubious worth” (1985: 8). Others argue that there are a number of epistemic goals other than the truth and that in some cases, truth need not be among the values of justification. Jonathan Kvanvig explains:


[I]t might be the case that truth is the primary good that defines the theoretical project of epistemology, yet it might also be the case that cognitive systems aim at a variety of values different from the truth. Perhaps, for instance, they typically value well-being, or survival, or perhaps even reproductive success, with truth never really playing much of a role at all. (2005: 285)


Given this disagreement, we can distinguish between what I will call the monovalent view, which takes truth as the sole, or at least fundamental, aim of justification, and the polyvalent view (or, as Kvanvig calls it, the plurality view), which allows that there are a number of aims of justification, not all of which are even indirectly related to truth.


Alternatives to the Truth Goal

One motive for preferring the monovalent view is that, if truth is not the primary goal of justification (that is, if justification does not connect belief with reality in the right way), then one is left only with goals that are not epistemic, that is, goals that cannot contribute to knowledge. The primary worry is that, in rejecting the truth goal, one is left with pragmatism. In response, those who defend polyvalence argue that, in practice, there are other cognitive goals that are (1) not merely pragmatic and (2) meet the conditions for successful cognition. Kvanvig explains that “not everyone wants knowledge…and not everyone is motivated by a concern for understanding. … We characterize curiosity as the desire to know, but small children lacking the concept of knowledge display curiosity nonetheless” (2005: 293). Further, much of our epistemic activity, especially in the sciences, is directed toward “making sense of the course of experience and having found an empirically adequate theory” (ibid., 294). Such goals can be achieved without appealing to truth at all. If this is right, justification aims at a wider array of cognitive states than knowledge.


Another argument for polyvalence allows that knowledge is the primary aim of justification but holds that much more is involved in justification than truth. The idea is that, even if one were aware of belief-forming strategies that are conducive to truth (following the evidence where it leads; avoiding fallacies), one might still not be able to use those strategies without having other cognitive aims, namely, intellectual virtues. Following John Dewey, Linda Zagzebski says that “it is not enough to be aware that a process is reliable; a person will not reliably use such a process without certain virtues” (2000: 463). As noted above, virtue responsibilists allow that the goal of having a large number of true beliefs can be superseded by the desire to create something original or inventive. Further still, following strategies that are truth-conducive under some circumstances can lead to pathological epistemic behavior. Amélie Rorty, for example, argues that belief-forming habits become pathological when they continue to be applied in circumstances no longer relevant to their goals (Zagzebski, ibid., 464). If this argument is right, then truth is, at best, an indirect aim of justification, and intellectual virtues like openness, courage, and responsibility may be more important to the epistemic project.


Objections to the Polyvalent View

One response to the polyvalent view is to concede that there are apparently many cognitive goals that fall within the purview of epistemology but to argue that all of these are related to truth in a non-trivial way. The goal of having true beliefs is a broad and largely indeterminate goal. According to Marian David, we might fulfill it by believing a truth, by knowing a truth, by having justified beliefs, or by having intellectually virtuous beliefs. All of these goals, argues David, are plausibly truth-oriented in the sense that they derive from, or depend on, a truth goal (David 2005: 303). David supports this claim by asking us to consider which of the following pairs is more plausible:


  • A1. If you want to have TBs [true beliefs] you ought to have JBs [justified beliefs].
  • A2. We want to have JBs because we want to have TBs.
  • B1. If you want to have JBs you ought to have TBs.
  • B2. We want to have TBs because we want to have JBs. (2005: 303)


David says, “[I]t is obvious that the A’s [sic] are way more plausible than the B’s. Indeed, initially one may even think that the B’s have nothing going for them at all, that they are just false” (ibid.). This intuition, he concludes, tells us that the truth-goal is more fundamental to the epistemic project than anything else, even if one or more other goals depend on it.


Almost all theories of epistemic justification allow that we are fallible, that is, that our justified beliefs, even if formed by reliable processes, may sometimes be false. Nevertheless, this does not detract from the claim that the aim of justification is true belief, so long as it is qualified as true belief held in the right way.



Rejections of the Truth Goal

In spite of these arguments, some philosophers explicitly reject the truth goal as essential to justification and cognitive success. Michael Williams (1991), for example, rejects the idea that truth even could be an epistemic goal when conceived of as “knowledge of the world.” Williams argues that in order for us to have knowledge of the world, there must be a unified set of propositions that constitute knowledge of the world. Yet, given competing uses of terms, vague domains of discourse, the failure of theoretical explanations, and the existence of domains of reality we have yet to encode into a discipline, there is not a single, unified reality to study. Williams argues that because of this, we do not necessarily have knowledge of the world:


All we know for sure is that we have various practices of assessment, perhaps sharing certain formal features. It doesn’t follow that they add up to a surveyable whole, to a genuine totality rather than a more or less loose aggregate. Accordingly, it does not follow that a failure to understand the knowledge of the world with proper generality points automatically to an intellectual lack. (543)


In other words, our knowledge is not knowledge of the world—that is, access to a unified system of true beliefs, as the classical theory would have it. It is knowledge of concepts in theories putatively about the world, constructed using semantic systems that are evaluated in terms of other semantic systems. If this is, in fact, all there is to know, then truth, at least as classically conceived, is not a meaningful goal.


Another philosopher who rejects the truth goal is Stephen Stich (1988; 1990). Stich argues that, given the vast amount of disagreement among novices and experts about what counts as justification, and given the many failures of theories of justification to adequately ground our beliefs in anything other than calibration among groups of putative experts, it is simply unreasonable to believe that our beliefs track anything like the truth. Instead, Stich defends pragmatism about justification: justification just is practically successful belief, and so truth cannot play a meaningful role in the concept of justification.

 

A response to both views might be that, in each case, the truth goal has not been abandoned but simply redefined or relocated. Correspondence theories of truth take it that propositions are true just in case they express the world as it is. If the world is not expressible propositionally, as Williams seems to suggest, then this type of truth is implausible. Nevertheless, a proposition might be true in virtue of being an implication of a theory, and so, for example, we might adopt a semantic rather than an ontological theory of truth, and it is not clear whether Williams would reject this sort of truth as the aim of epistemology.


Similarly, someone might object to Stich’s treating pragmatism as if it is not truth-conducive in any relevant sense. If something is useful, it is true that it is useful, even in the correspondence sense. Even if evidence does not operate in a classical representational manner, the success of beliefs in accomplishing our goals is, nevertheless, a truth goal. (See Kornblith 2001 for an argument along these lines.)


Conclusion

Epistemic justification is an evaluative concept about the conditions for right or fitting belief. A plausible theory of epistemic justification must explain how beliefs are justified, the role justification plays in knowledge, and the value of justification. A primary motive behind theories of justification is to solve the dilemma of inferential justification. To do this, one might accept the inferential assumption and argue that justification emerges from a set of coherent beliefs (internalist coherentism) or an infinite set of beliefs (infinitism). Alternatively, one might reject the inferential assumption and argue that justification derives from basic beliefs (internalist foundationalism) or through reliable belief-forming processes (externalist reliabilism). If none of these views is ultimately plausible, one might pursue alternative accounts. For example, virtue epistemology introduces character traits to help avoid problems with these classical theories. Other alternatives include hybrid views, such as Conee and Feldman’s (2008), mentioned above, and Susan Haack’s (1993) foundherentism.



References

  1. Aikin, S. 2009. “Don’t Fear the Regress: Cognitive Values and Epistemic Infinitism.” Think, 23, 55-61.
  2. Aikin, S. F. 2011. Epistemology and the Regress Problem. London: Routledge.
  3. Alston, W. P. 1988. “An Internalist Externalism,” Synthese, 74, 265-283.




Introduction to Basic NFT’s - Qlael Practicalintroduction

practicalintroduction.com



The Difference between Fungible and Non-Fungible Assets

By definition, an asset is fungible when its units are easily interchangeable with one another: you cannot distinguish one unit of a fungible asset from another. Every unit of a fungible asset has the same market value and validity. For example, one fifty-dollar bill is equal to any other fifty-dollar bill in value and validity. Other examples of fungible assets include precious metals, commodities, cryptocurrencies, fiat currencies, and bonds.


Non-fungible assets are not interchangeable with each other; they have unique properties that set them apart. Even if NFTs look similar in some respects, there are prominent differences between them. Notable examples of non-fungible items in the real world include concert tickets and artwork. Even if two concert tickets have the same design, a front-row ticket is worth more than a back-row ticket. Similarly, two paintings may look alike, yet rare elements differentiate them.
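
As a rough illustration of this distinction, here is a minimal Python sketch with made-up values (not tied to any real asset, marketplace, or library): fungible units can simply be pooled as a quantity, whereas non-fungible items have to be tracked by identity.

    # Fungible: only the quantity matters; any unit is interchangeable with another.
    wallet_usd = 50 + 50  # two fifty-dollar bills; indistinguishable, so we just add amounts

    # Non-fungible: each item carries its own identity and attributes, so two
    # similar-looking items are still not equal or interchangeable.
    front_row_ticket = {"event": "concert", "seat": "A1"}
    back_row_ticket = {"event": "concert", "seat": "Z40"}

    assert wallet_usd == 100
    assert front_row_ticket != back_row_ticket  # same design, different value and identity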


What are NFTs?

Non-fungible tokens are digital assets whose identifying information is documented in smart contracts. NFTs are presently one of the hottest trends in the blockchain and crypto domain. If you dive deeper into the technicalities, you will find that NFTs are essentially unique digital codes that rely on the same blockchain technology as cryptocurrencies such as Ethereum. Unlike cryptocurrencies, however, each NFT is unique and confers ownership of a specific digital asset, and its existence on a blockchain is what ties NFTs and blockchain technology together.


Properties of NFTs

  • Uniqueness

The foremost trait on which nonfungible tokens rely is uniqueness. The information in the code of NFTs illustrates the properties of the tokens in detail, thereby differentiating them from others. For example, a digital art item could have information about individual pixels in the code of its NFT.

  • Traceability

The on-chain record of transactions for an NFT includes all details of its history. You can easily trace the history of an NFT from the time of its creation to the present, and you can identify each time the token changed hands. The traceability trait therefore makes it easy to verify the authenticity of NFTs.

  • Indivisibility

Another important property you will notice in the NFT marketplace is indivisibility. Just as you cannot purchase half of an artwork, you cannot transact with fractions of an NFT; it is impossible to divide an NFT into smaller denominations.

  • Rarity

Non-fungible tokens also feature scarcity, which can make them more attractive to buyers. Keeping supply below demand helps ensure that the assets remain highly desirable. The short sketch below shows how uniqueness, traceability, and indivisibility can be expressed in code.
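
This sketch is a minimal, purely illustrative Python example: an in-memory registry, not a real blockchain contract and not the ERC-721 standard, and names such as NFTRegistry are hypothetical. It shows how unique token ids, an append-only ownership history, and whole-token transfers capture the properties listed above.

    from dataclasses import dataclass, field
    from typing import Dict, List


    @dataclass
    class Token:
        token_id: int    # unique identifier (uniqueness)
        metadata: str    # e.g. a content hash describing a digital artwork
        owners: List[str] = field(default_factory=list)  # full ownership history (traceability)


    class NFTRegistry:
        """In-memory stand-in for an NFT ledger; real NFTs live on a blockchain."""

        def __init__(self) -> None:
            self._tokens: Dict[int, Token] = {}

        def mint(self, token_id: int, metadata: str, owner: str) -> Token:
            if token_id in self._tokens:
                raise ValueError("token id already exists")  # enforce uniqueness
            token = Token(token_id, metadata, [owner])
            self._tokens[token_id] = token
            return token

        def transfer(self, token_id: int, new_owner: str) -> None:
            # Whole tokens change hands; there is no fractional amount (indivisibility).
            self._tokens[token_id].owners.append(new_owner)

        def history(self, token_id: int) -> List[str]:
            # Every past and present owner, in order (traceability).
            return list(self._tokens[token_id].owners)


    registry = NFTRegistry()
    registry.mint(1, "hash-of-artwork", "alice")
    registry.transfer(1, "bob")
    print(registry.history(1))  # prints ['alice', 'bob']

On a real blockchain these guarantees come from the consensus protocol and the token standard rather than from a single Python object, but the properties being enforced are the same.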




Applications of NFTs

  • Digital Identity

NFTs could play a vital role in changing how you visit museums, galleries, and landmarks. For example, they could be used to verify the identity of guests at such events and places.

  • Marketing

NFTs could also find promising use cases in the domain of marketing. Taco Bell has been one of the frontrunners in using non-fungible tokens for marketing purposes.

  • Gaming

The use of NFTs in gaming has also been one of the prominent highlights of the NFT ecosystem. While online games have long featured items that players can buy and sell for money, those items remain under the control of a centralized game server. The introduction of play-to-earn games has transformed this model: players can earn items in NFT games and sell them on marketplaces at higher prices. Axie Infinity is a prominent example of NFTs used in a game.

  • Digital Art

NFTs are a stepping stone for the development of digital art. With NFTs, artists can create programmable art: creators can program their pieces to change or behave differently under certain conditions. A buyer can also verify the authenticity of a digital artwork before purchasing it. Creative theft is a major issue, and digital artists can fight it by using NFTs to present their work. (Indrawan Vpp)





General Relativity Withstands Double Pulsar’s Scrutiny - Qlael Practicalintroduction

Reviewed by: Indrawan Vpp

practicalintroduction.com

Figure 1: The double pulsar is a pair of rotating neutron stars, both of which are pulsars. The light beams from each pulsar (yellow) are shown exiting through a donut-shaped magnetic field (blue). On Earth, we see these beams as flashes at regular intervals. As the two pulsars revolve around each other, gravitational waves are emitted (represented by “ripples” in the underlying spacetime fabric). By monitoring changes in the timing of the flashes, researchers have measured the amount of energy taken away by gravitational waves. This loss matches what Einstein’s general relativity predicts to a level of one part in 10,000.



Neutron stars are the densest celestial objects after black holes. But unlike black holes, some neutron stars emit beams of radiation out of their magnetic poles, producing a “lighthouse” effect as they rotate. By recording the flashes from these so-called pulsars, giant radio telescopes can infer the physical properties of neutron stars. For nearly two decades, Michael Kramer from the Max Planck Institute for Radio Astronomy, Germany, and his collaborators have monitored the double pulsar PSR J0737–3039A/B, a unique system composed of two pulsars in orbit around each other. The team has now released 16 years’ worth of their data. The gravity community has long awaited this update, as an earlier study—based on just 2.5 years of data—showed that the pulsar pair is an excellent testbed for strong-field gravity. The new extended dataset does not disappoint: It not only improves the precision of previous gravity tests by orders of magnitude, but it also enables a few new ones. Einstein’s general relativity passes all these challenges with flying colors.


Astrophysicists refer to neutron stars as superstars because of their superdense matter, superconducting/superfluid interiors, super-fast rotation, and super strong gravity and magnetism. The most relevant super property for gravity tests is that pulsars are superprecise clocks. Because of angular momentum conservation, neutron-star rotation is remarkably stable, which means the ticks of a pulsar clock have a long-term consistency comparable to that of the best atomic clocks on Earth. On top of that, neutron stars are almost as compact as black holes, meaning that their precise ticking occurs in highly curved spacetime. Certain alternative gravity theories predict that neutron-star clocks could display large deviations from general relativity predictions.


When a pulsar is located in a binary, the arrival times of pulses at radio telescopes are modified by the binary’s orbital motion in a characteristic way. Precise measurements of these arrival times over the course of years to decades allow tiny changes in the orbital motion to be detected. For example, observations of the Hulse-Taylor pulsar, the first known stellar binary containing a pulsar, revealed a shrinking in the orbital radius and an acceleration in the revolution rate—evidence that orbital energy was being lost to gravitational-wave radiation. In terms of studying gravitational effects, the double pulsar PSR J0737–3039A/B—detected in 2003—is, in many ways, superior to its binary rivals. First, it has the advantage that it is the only binary where both components are visible as pulsars (Fig. 1). Second, it is relatively nearby—at a distance of about 2000 light-years from Earth. Lastly, its orbital inclination with respect to us is near “edge-on” (about half a degree from 90°), which is fortuitous because the pulsar signals pass through the orbital plane where they can be imprinted by more of the binary’s curved spacetime. Thus, the double pulsar provides a unique window into the strong gravity regime.


The latest dataset reported by Kramer and co-workers was laboriously taken from 2003 to 2016 with six large radio telescopes located in Australia, the US, the Netherlands, France, Germany, and the UK. Combining these data was nontrivial, as the telescopes observe the double pulsar at different frequencies, on different days, and with different sensitivities. In addition, the high precision of the timing measurements required that the researchers take into account many astrophysical “contaminations.” For example, free electrons in the interstellar medium cause a time-varying, dispersive effect that must be subtracted from the pulsar timing with the help of observations of the same pulsars made at different frequencies. The researchers also had to deal with the motion of the pulsars in the Milky Way with respect to our Solar System, as that changes the apparent ticking of the pulsars.


The most significant result from the double-pulsar observations is the test of the quadrupole formula that describes the energy loss due to the emission of gravitational waves in Einstein’s general relativity. As gravitational waves take away orbital energy, the orbit size becomes smaller, and the revolution gets faster. This speedup is observable in a shift to shorter intervals between subsequent approaches to the periastron—the point when the orbiting pulsars are nearest to each other (Fig. 2). The latest observations from Kramer and colleagues deliver precision in the energy loss of 0.013% after correcting for effects from the pulsars’ motion relative to the Sun as well as for an effect from the pulsars’ spindown (rotational slowing caused by energy being lost to electromagnetic radiation). This precision in the gravitational-wave energy loss is much better than that obtained for the Hulse-Taylor pulsar (0.3%) as well as that measured by the LIGO and Virgo Collaboration for a binary neutron star merger (20%).
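
For context, the standard leading-order quadrupole-formula prediction of general relativity for the decay of the orbital period can be written out explicitly. The expression below (in LaTeX notation) is the textbook form of that result rather than a formula quoted from the paper under discussion; here P_b is the orbital period, m_p and m_c are the two pulsar masses, e is the orbital eccentricity, G is Newton's constant, and c is the speed of light:

    \dot{P}_b = -\frac{192\pi G^{5/3}}{5 c^{5}}
                \left(\frac{P_b}{2\pi}\right)^{-5/3}
                \frac{m_p m_c}{(m_p + m_c)^{1/3}}
                \frac{1 + \frac{73}{24} e^{2} + \frac{37}{96} e^{4}}{(1 - e^{2})^{7/2}}

Because the quantities entering this prediction are themselves determined from the timing data, comparing the predicted and observed orbital speedup is what yields the 0.013% figure quoted above.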



Figure 2: Cumulative shift of orbital periastron time for the double pulsar. The data are compared to two gravity theories: Einstein’s theory of relativity, which predicts gravitational-wave emission, and Newton’s theory, which does not. The observations by Kramer and co-workers show exceptional agreement with Einstein’s theory.
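
To see how a steady orbital decay produces the curve described in Figure 2, consider a small Python sketch with illustrative, order-of-magnitude numbers (the period and period derivative below are stand-ins chosen for the example, not the measured double-pulsar values): if the orbital period P_b shrinks at a constant rate Pb_dot, each successive periastron passage arrives slightly earlier than a constant-period model predicts, and the cumulative shift grows quadratically with time.

    # Cumulative periastron time shift for a binary whose orbital period
    # decays at a constant rate: shift(t) ~ 0.5 * Pb_dot * t**2 / P_b.
    P_b = 2.45 * 3600.0   # orbital period in seconds (illustrative, roughly 2.45 h)
    Pb_dot = -1.25e-12    # dimensionless period derivative, seconds lost per second (illustrative)

    for year in (4, 8, 12, 16):
        t = year * 365.25 * 86400.0               # elapsed observing time in seconds
        n_orbits = t / P_b                        # number of completed orbits
        shift = 0.5 * Pb_dot * P_b * n_orbits**2  # cumulative shift in seconds (negative = earlier)
        print(f"after {year:2d} yr: {shift:6.1f} s")

The quadratic growth is why a period derivative of order one part in a trillion becomes clearly measurable after a decade or more of monitoring.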


Besides the quadrupole-formula test, Kramer and co-workers significantly improved the precision of other gravity tests, such as the test of the Shapiro delay effect, whereby a curved spacetime makes radio signals travel for a longer time. In addition, the team performed relativity tests that have never been performed before in the double pulsar. They have, for example, measured a relativistic deformation of the orbit, a relativistic spin-orbit coupling between the pulsars’ rotations and their orbital motion, and a deflection of radio signals in the curved spacetime of the pulsars. All measurements are beautifully consistent with predictions from a single elegant and profound theory, Einstein’s general relativity.


These results provide empirical guidance for developing theories that go beyond Einstein’s. Some of these alternative gravity theories predict large deviations from general relativity in the strong-field regime. For example, Kramer and colleagues considered two classes of alternative gravity theories that augment Einstein’s general relativity with a massless scalar field. In general, these theories predict a larger shift in the double pulsar’s periastron time than is observed. The researchers are therefore able to rule out a large portion of the parameter space for these alternative theories.


For now, Einstein’s theory remains unchallenged. However, future tests may eventually find a deviation. Testing gravity in the strong-field regime continues in ongoing experiments that measure gravitational waves and black hole shadows. And, of course, the double pulsar will continue to be monitored, so we can all wait for the next update on its timing evolution.



References

1. J. H. Taylor, “Binary pulsars and relativistic gravity,” Rev. Mod. Phys. 66, 711 (1994).

2. M. Kramer et al., “Strong-field gravity tests with the double pulsar,” Phys. Rev. X 11, 041050 (2021).

3. M. Kramer et al., “Tests of general relativity from timing the double pulsar,” Science 314, 97 (2006).
