Current Issue

Volume 19, Issue 4 (2025)

Current Articles

Journal Article · 24 June 2025

The Measure of a Man: Considering Science Fiction and Christianity in the Regulation of Artificial Intelligence Models

Does artificial intelligence “think,” and if it does, what should the law do about it? This article examines popular culture and the Turing Test to assess whether artificial intelligence “thinks,” applying the author’s engineering background and his Christian worldview. The author concludes that artificial intelligence mimics rather than creates and considers the risks and benefits of artificial intelligence given that conclusion. The author suggests that legal regulation should be measured, leaving important decisions to humans while at the same time encouraging development of this important, misunderstood technology.
Journal Article · 24 June 2025

Is AI a Horse or a Zebra: Do AI Free Speech Concerns Require New Legal Tools?

In 1996, as the internet was coming into popular use among the general public, Judge Frank Easterbrook published Cyberspace and the Law of the Horse. In this essay and lecture, Judge Easterbrook warned against the rise of specialized law for general-purpose technology, positing instead that many common law principles would be able to evolve. As the essay and lecture suggest, new technologies may provide an opportunity to examine whether existing legal principles function optimally, but we should be cautious of interventions made in the name of protecting the public from new and rapidly evolving technologies. Nearly thirty years later, we are grappling with similar questions as artificial intelligence (“AI”) gains wider awareness and adoption among the general public. Some have questioned whether our existing free speech frameworks and norms can adapt to the challenges of generative AI. This Article examines how most of the concerns surrounding AI and free speech simply rehash the long-standing debates Judge Easterbrook discussed all those years ago. Further, this Article argues that Judge Easterbrook’s conclusions about law in the early days of the internet still apply: these concerns can and should be resolved using existing frameworks. As such, the creation of specialized law should be limited to only those scenarios where these concerns raise truly novel questions or reveal that the existing frameworks prevent citizens from fully using AI to exercise their freedom of expression.
Journal Article · 24 June 2025

Open-Source; Open-Season; Open-Fire: Google v. Oracle and the Vulnerability of Code to Copyright Infringement by AI Harvesting

In Google v. Oracle, the Supreme Court was forced to decide whether Google’s copying of 11,500 lines of computer code from Oracle without permission constituted copyright infringement. Much was on the line, including precedent concerning the copyright status of millions of lines of code nationwide. In its lengthy decision, the Supreme Court avoided the central issue of whether the copied “declaring code” could be protected by copyright or was instead a functional tool outside the Copyright Act. It punted the issue, assuming for the sake of argument that the declaring code that was taken was in fact copyrightable material. This forced the Supreme Court to review Google’s copying under a skewed fair use analysis and to find that Google’s copying was a fair use. In the end, the Supreme Court reached a narrow decision: Google had done nothing wrong, and it did not owe Oracle anything. While its holding was intended to be narrow, the Court did not anticipate or account for the parabolic rise of generative artificial intelligence (“AI”) and the potential misuse of its dicta against computer code copyright holders. The lack of guidance on the copyrightability of code leaves intellectual property law in a period of purgatory as generative AI picks the pocket of protectable software left and right.
Journal Article · 24 June 2025

Silicon Sentinels: Using Whistleblower Protections to Manage Information Asymmetry and AI Risk

In the rapidly evolving landscape of artificial intelligence (“AI”) development, policymakers face a critical challenge: obtaining accurate and timely information about the potential risks and impacts of advanced AI systems. This Article examines the pivotal role of whistleblower protections as a mechanism to address the information asymmetry between AI companies and government officials. Employees inside AI companies are uniquely positioned to share information that can help outside regulators make wise policy decisions, but employees might be reluctant to do so unless their decision to share that information is legally protected. We propose a comprehensive framework for AI whistleblower protections as a critical strategy for ensuring public safety, technological accountability, and informed policymaking in the AI sector. The proposed approach recognizes the unique challenges of regulating emerging technologies, offering a multi-faceted strategy that combines judicial and administrative remedies. Whistleblower protections are presented not merely as a reactive measure, but as a proactive tool for facilitating essential insights into potential technological risks. The framework addresses key implementation challenges, including robust reporting mechanisms, comprehensive employee education, expanded regulatory oversight, and meaningful financial incentives for disclosure. This analysis contributes to the ongoing dialogue about effective AI governance by demonstrating how whistleblower protections can empower employees to raise important concerns, bridge critical information gaps, and ultimately serve the broader public interest in understanding and mitigating potential technological risks.
Journal Article · 24 June 2025

Christianity, Conception, and Consciousness: Why a Conscious Human Mind Is Necessary to Fulfill the Conception Requirement

As artificial intelligence (“AI”) advances, it not only affects our daily lives but also implicates patent law. The Federal Circuit Court of Appeals has already held in Thaler v. Vidal that only a natural human person can be an “inventor” entitled to receive a patent, thereby excluding AI. The rationale in Thaler centers on statutory interpretation, leaving open the question of whether AI is capable of fulfilling the conception requirement—an essential element of qualifying as an inventor and receiving a patent. This Comment aims to expand the rationale of Thaler and argues that AI cannot fulfill the conception requirement; thus, it should not be granted inventor status. This conclusion is reached through a Christian worldview analysis of what it means to be human and why the conscious human mind is unique. Conception historically required the contribution of a human being’s mental capacity and today is defined as the act of a human mind. Humankind is unique amongst creation because it was created by God in His image and with a special purpose: human beings are to have dominion over and subdue the earth. We accomplish this purpose in part through our role as God’s sub-creators. While God can create from nothing, humans can create using only the natural resources that God created. This God-man relationship is reflected in the man-AI relationship. AI is created by humans in the image of man—it possesses the limited capacity to perform activities traditionally requiring human intelligence—and can also (to an extent) sub-create. Transcending these concepts is the purpose of all of creation: to glorify God. To do so, creation must remain in its proper position in the Biblical hierarchy. In this hierarchy, AI, a creation of man, must fall at the end. To place it in a position equal to man by giving it the right to obtain a patent would undermine this hierarchy and violate a Christian worldview.
Further, even if AI could have a conscious human mind, AI is not tethered to an objective moral code, unlike human beings who are tied to the objective law of nature and nature’s God. Because of this, granting AI inventor status would have unintended moral consequences. Along with moral obligations, God placed limitations on humankind that justify humankind placing limitations on AI. One of these limitations must be barring AI from qualifying as an inventor under patent law by explicitly requiring that the act of a human mind alone can fulfill the conception requirement. Finally, God gifted humans the unique ability to be conscious, a fundamental element of the conception requirement. There are three generally accepted theories of consciousness and the human mind. Within both a Christian worldview and materialistic worldview, AI is not conscious and can never be under each of the three theories. Thus, the act of a natural, conscious human mind should be an explicit element of fulfilling the conception requirement, barring AI.
