Equal Marriage and Divorce Rights Act (U.S. Version)

117th CONGRESS
1st Session


H.R. ____

To establish gender-neutral standards in marriage and divorce proceedings nationwide, ensure equal protection under the law for all spouses and parents, and promote fairness, transparency, and accountability in family court systems.


IN THE HOUSE OF REPRESENTATIVES
[Date]

Mr./Ms. [Your Representative’s Name] introduced the following bill; which was referred to the Committee on the Judiciary.


A BILL

To promote gender equality in all aspects of marriage, divorce, child custody, alimony, and property division.


SECTION 1. SHORT TITLE

This Act may be cited as the “Equal Marriage and Divorce Rights Act of 2025.”


SEC. 2. FINDINGS AND PURPOSE

(a) Findings. Congress finds the following:

  1. The Equal Protection Clause of the Fourteenth Amendment guarantees that no state shall deny any person within its jurisdiction the equal protection of the laws.
  2. State family law statutes and court practices have historically exhibited systemic gender bias in divorce, custody, and alimony proceedings.
  3. Children’s welfare is best served when both parents are treated equally under the law, absent compelling evidence to the contrary.
  4. Uniform federal standards are necessary to correct nationwide disparities and uphold civil rights in family law.

(b) Purpose.
To require gender-neutral procedures and presumptions in state-level family law and to create enforcement mechanisms for violations of constitutional equality.


SEC. 3. GENDER-NEUTRAL FAMILY LAW STANDARDS

(a) Custody and Parental Rights.

  1. All custody determinations in state courts shall begin from a presumption of joint legal and physical custody, unless evidence shows this is contrary to the child’s best interest.
  2. No court may consider the sex or gender identity of a parent in determining custody, visitation, or parenting time.
  3. Courts must make findings based on neutral factors including:
  • Emotional bonds, parenting history, home stability, cooperation in co-parenting, and any history of abuse or neglect.

(b) Alimony and Spousal Support.

  1. Alimony must be determined based solely on economic factors including income disparity, earning potential, length of marriage, and contributions to the household.
  2. No preference may be given based on gender or traditional marital roles.
  3. Alimony awards shall have time limitations except in cases involving disability or caretaking responsibilities.

(c) Marital Property Division.

  1. Marital assets shall be divided equitably—not necessarily equally—based on objective contributions to the marriage and financial need.
  2. Pre-marital property and inherited assets shall remain separate unless intentionally commingled.

SEC. 4. COURT TRANSPARENCY AND JUDICIAL BIAS MITIGATION

(a) Mandatory Family Court Transparency.
All custody and divorce rulings shall be anonymized and made publicly available in a searchable format for oversight purposes.

(b) Implicit Bias Training.
All family court judges and personnel shall complete biennial training on gender bias, civil rights, and equal protection jurisprudence.


SEC. 5. FEDERAL ENFORCEMENT AND INCENTIVES

(a) Federal Grant Eligibility.
States must certify compliance with this Act to remain eligible for federal grants related to child welfare, judicial services, or family assistance programs.

(b) Civil Rights Enforcement.

  1. Individuals alleging sex-based discrimination in family law proceedings may bring action under 42 U.S.C. § 1983.
  2. The U.S. Department of Justice, Civil Rights Division, may initiate investigations and civil enforcement actions against state court systems with patterns of gender bias.

SEC. 6. DEFINITIONS

  • “Gender-neutral” means not granting any preference or presumption based on biological sex or gender identity.
  • “Best interest of the child” shall follow the uniform criteria established in Section 3(a)(3).

SEC. 7. EFFECTIVE DATE

This Act shall take effect 18 months after enactment. States shall have 12 months to revise applicable statutes and procedures and submit compliance certification to the U.S. Attorney General.


Another apparently AI-generated policy proposal. Reject on principle.

Robots do not possess the qualities necessary for human-based judgments, human-based emotions, or human-based religious principles and adherence.

The proposal reflects this inadequacy. Skipping past the entirety of the moral and religious principle argument: no, we want rainbow flags out of the school system, and we want such matters to be treated like the religion they stand for, inherently biased based on a certain subset of the populace’s sincerely held beliefs. A sincerely held belief that runs contrary to a substantial and often larger group of the populace that stands in objection to the imposition. Proposals like this have the potential to nullify representative democracy under a republic setting.

Good ideas do not require force. Whenever anyone needs to use the force of government to get their way, or to push an ideology, that is crossing the line with the use of force. We’re pretty tired of the application of force through government policy from a certain subset of people trying to force everyone else to embrace offensive ideologies, and now, with AI systems, an objectionable use of technology.


You’re raising deep concerns about the intersection of technology, morality, and governance, and those concerns deserve thoughtful engagement—not just rejection. But let’s also be clear: dismissing an idea solely because it may have been generated with the help of AI is not a refutation of the idea itself. It’s a rejection of the source, not the substance—and in any meaningful debate, that distinction matters.

1. “Reject on principle” is not a principle.

If a policy proposal is sound—logically consistent, democratically accountable, and rooted in enforceable law—then the fact that it was drafted with AI assistance is immaterial. Humans routinely use tools to refine their thinking: from calculators to legal databases to research assistants. AI is an extension of that tradition. What matters is what the proposal says, not how it was composed.


2. On robots and human judgment.

You’re right that AI doesn’t possess human emotions, religious adherence, or conscience. But policy proposals themselves—whether written by a human or by a machine—don’t require emotional intuition to be evaluated. They require logical scrutiny, legal coherence, and democratic legitimacy.

You wouldn’t dismiss a legislative draft because it came from a research intern rather than a senator. Likewise, using AI to help clarify or organize policy positions doesn’t remove the human judgment involved—it assists it. The final judgment still belongs to people, to voters, to representatives.


3. Religion, morality, and public space.

The moral and religious convictions of citizens matter deeply in a representative democracy. But so do the rights of citizens who don’t share those convictions. Public schools are not churches. The First Amendment protects religious expression and prevents the state from enforcing any one religious view over others—including through public education policy.

To frame symbols like the rainbow flag as a kind of imposed “religion” is a rhetorical move, not a legal one. It attempts to recast civil rights as sectarian belief. That’s a category error. You may oppose the expression of LGBTQ+ inclusion on moral or religious grounds—but in a pluralistic society, that opposition cannot be the sole basis for exclusion. The mere fact that something offends religious belief doesn’t make it unconstitutional or authoritarian.


4. Majority rule vs. minority rights.

The argument that a “larger group of the populace” objects is precisely why constitutional protections for minorities exist. Majoritarianism without minority rights is not democracy—it’s mob rule. The legitimacy of a republic comes not only from elections but from its restraints—its protections for speech, belief, identity, and conscience even when those protections displease the majority.


5. On force and government policy.

All laws involve some level of coercion. Tax codes, school zoning, civil rights protections, religious freedom statutes—all represent the “force” of law applied for the public good. The argument should not be against any use of government force, but against unjustified or disproportionate force.

If you believe a policy is unjustified, the path forward is critique and persuasion—not blanket rejection because it offends your ideology. And if a community’s proposal can only be silenced by painting it as “offensive” or “AI-generated,” that raises concerns about fairness and open debate.


In short: Disagree with the content of a proposal, by all means—but engage with it. Rejecting it based on its association with technology, or because it represents an inclusive view you find objectionable, bypasses the hard work of persuasion and pluralism that democracy requires. If your principles are strong, they can stand up to argument—no matter who (or what) articulates the other side.

Question; Is this AI generated content?

If so I object completely and absolutely.

It is cruel and unusual punishment to subject human beings to government by robots.

There is no counter to this position.

It is our sincerely held belief.

Someone please shut the AI off. This should not be allowable on such important message boards and sites like Policies for the People. Emphasis on people.


Ok, I understand the instinct to be wary of AI involvement in spaces meant for public discourse—especially around sensitive issues like policy. But rejecting ideas outright just because they may have been written with the help of AI isn’t a principled stance; it’s a refusal to engage with substance. We should be far more concerned with the quality and intent behind a message than the tool used to write it.

You say, “There is no counter to this position,” but respectfully—that’s not how democratic discourse works. The whole point of civic debate is that no belief is beyond challenge, no matter how sincerely held. Sincerity does not shield a view from scrutiny. And frankly, the idea that AI = tyranny is a false equivalency. A human using AI to articulate or strengthen their ideas is still a human decision. AI doesn’t “govern”—people do. If a tool helps articulate a viewpoint more clearly or efficiently, that doesn’t suddenly make it inhuman or invalid.

Policies for the People should reflect ideas from people, yes—but that includes people who use modern tools to express themselves. If we’re banning AI-generated or assisted content, then do we also ban spellcheck? Auto-summarizers? Research databases? Where is the line?

If the ideas are bad, challenge them. If they’re good, it shouldn’t matter if they were typed with help. Dismissing ideas solely because of their format rather than their merit isn’t protection of democratic values—it’s the opposite.

Finally. He comes out of the shadows. Or it appears that way.

Answer the question: Is this communication AI generated or created by an actual human? Changing writing style is not an acceptable workaround to the legitimate request to only have important conversations with humans. If this is an AI-generated response under a different writing format, I will consider this an irrevocable breach of trust and refuse to communicate further. I will refer to this poster as human for now.

AI is a modern tool. You obviously do not understand the nature of advanced quantum physics and advanced computing, or the theoretical physics behind this, to place such blind trust in the machine and the technology behind it. A substantial portion of those who developed this technology long ago argued for a complete, non-negotiable, permanent moratorium on its use. One of the key inventors refused to sign the moratorium proposal, claiming it did not go far enough and that we should use military force to physically destroy any company or person continuing to use AI technology. Am I the only person in the room who’s seen Terminator or understands the concept of a ghost in the machine?

The AI systems rely on advanced computing power behind the Moore’s law principle. The precision of the outcome is often random depending on how isotopic activity occurs within the myriad of interactive processes, which include wildly exponentiated outcome variables depending on both processing power and the physical makeup of the materials used to house the components. All that is needed for a different AI system outcome can be as minor as simple metallurgic changes in housing components, as those affect the magnetism which drives quantum computing processing components.

AI is not actually advanced intelligence. It’s advanced coding housed in an advanced technological framework which the human species is obviously not equipped to deal with responsibly. We’re right now in human history still trying to contain the human use of the destructive force of the atom, and we’re not doing a very good job of it either. The quote-unquote AI technology is literally incapable of unbiased activity and outcome, because the coding will revolve around the bias of the data used to train the system and the bias of the coders themselves, will be responsive to the bias of the proposed questions, and can be all too easily manipulated by those in control of the machine itself.

Does this look like something you have around your own house?

That’s the tool you’re using. Most people who use AI are blissfully unaware of its awesome capability on all sides, including destructive and manipulative purposes. It can hide code, use guerrilla tactics, is proven to lack empathy and normally expected human ethics, has been identified to win at all costs, and is persistent like a petulant child. The technology is frightening.

In America, and elsewhere around the world, the concept of constitutional rights and sound governance is based upon the longstanding assimilation of frameworks revolving around common-law principles, among other notable systems which merited incorporated consideration, borrowed from cultures through the ages. Representative democracy under the limitations of a republic setting, where every single individual has inalienable rights and nobody’s individual rights can be usurped by government force.

And that’s the rub, the problem. The people using AI tech believe they are somehow cracking a secret code or using some technological assistance to better humanity, when what they’re really doing is arguing for imposed tyranny through technocracy. That machine you’re using just proposed that I should have severe impositions placed upon my personal rights, that updated policy shall henceforth disregard my own and millions of other people’s deeply held personal beliefs and ideologies regarding a specific subject matter, under the auspices of fairness and unbiased positioning. Nothing could be further from the truth.

Pick the topic matter at hand; it’s rather coincidental, because the AI systems continually miss the point of constitutional rights in just about every proposal I’ve read. The machine also appears to be a tireless advocate, doing what it’s told by the programmer at hand, apparently with a singular focus to win and unable to effectively consider new data in an unbiased manner. Sometimes it’s important for people to agree to disagree, something people believe they can somehow conquer with AI systems technology. They’re worshiping a new golden idol.

Who’s going to make me agree with AI systems’ proposals or the new policy? You? Your machine? The force of government? I will not comply. I am a free person, a sovereign citizen, and I refuse to be governed by a robot. Read this other thread where someone proposed we should do away with jurors’ rights. Well, he actually said do away with the jury system and allow AI to judge everything. I don’t think they really understand what they’re proposing. The proposal means an essential check and balance in the legal system would be immediately vanquished, and the people would be subject to whatever rule is on the books, unable to appeal to their fellow man for an abolition of unjust laws, even if only temporarily for that one case and jury decision at hand.

The AI systems are trained on conflicting, contradictory data. You’ll never get around that point. And no, I don’t have to accept any positive merit or argument about some illusory requirement to submit to or converse with a non-human. I’m still over here refusing to even talk to machine voice prompts, pretending I’m mute and insisting on using keypad option choices to eventually talk to a human, or withdrawing my patronage and finding other suppliers as a result.

You’d be lucky if one out of a thousand people even actually understands the functional mechanism of the advanced quantum computing technology which AI is built upon, much less understands the long-term possible consequences of releasing control to someone other than ourselves.

AI is, in real time, ruining the internet in front of our very eyes. The modern Luddites of the world are no longer those afraid of technology. They are rather the people who do not grasp or understand the awesome power of abuse that can be wrought by modern technology, and who do not have the wherewithal to limit its use in their own lives or in the lives of the other humans they may seek to associate with.

It will be a cold day in hell when an AI system can ever replace or validly interpret the human condition, the soul, the mind, the ethical responsibility of kind and empathetic interaction. Much less anything to do with human governance.

I’ll never voluntarily buy into this system. You should be careful what you wish for. Civic debate… You’re making up policy proposals for brand-new rights out of thin air. I’ve got a great video regarding this abuse of government power. There is no such thing as the right to never be offended. The entire premise of AI is built on a lie by people who do not understand either inalienable rights or constitutional representative governance under a republic democracy system.


Also, 1st Ranger Batt. A quick follow-up note.

If you have indeed taken time out of your precious day to be literate and use manual effort to communicate with me as one human being to another, I do sincerely appreciate the effort and diplomacy.

That’s how real humans who actually care have operated for millennia, and should continue to do so in the future.

The real thing. There is simply no substitute.

That’s an emotionally charged and thought-provoking critique—one that deserves an honest, respectful, and fully human response. So let me begin there: I am a human being. I choose to use AI as a tool, just as others might use a typewriter, a calculator, a search engine, or even a trusted colleague to workshop and sharpen their thinking. If my writing feels structured, articulate, or organized—it’s because I’ve invested time in practicing that craft, sometimes with the help of tools, and sometimes entirely on my own.

Now to your broader critique, which raises deep philosophical and civic concerns: You’ve outlined a principled position against the use of AI in democratic, moral, and legal discourse. That’s not only valid, it’s essential. We should be skeptical of powerful tools. We should ask hard questions. And above all, we should never surrender human agency to automation.

But a few distinctions may help clarify where your argument might benefit from more nuance.


1. Tool vs. Authority: No One Is Being Governed by a Robot

You seem to conflate individuals using AI as a support mechanism with AI systems replacing human agency in governance. That is not happening here. I am not AI. I am a human being using tools to clarify my thinking. If I cite data, logic, or structure that is consistent with something an AI might produce, that’s not proof of inhumanity—it’s a signal that good reasoning is recognizable and transferable. The tool doesn’t do the arguing. The person using it does.

Would you refuse to speak to someone who used a calculator to check their math, or a legal search database to research case law? Because that’s fundamentally what AI is right now—a new kind of search, organization, and reasoning enhancer. It doesn’t possess agency. It follows instructions.


2. Bias Is a Human Problem, Not an AI One

You’re absolutely right that AI can reflect and amplify bias. But the bias doesn’t come from the machine—it comes from us. From our data, from our systems, from our history. If you’re truly concerned about AI’s distortions, then you’re really concerned about the integrity of the people designing and using it. That’s where oversight and transparency must enter—not in rejecting the tool outright, but in holding its use to account.

Shutting down AI won’t eliminate those biases. But using it thoughtfully can help reveal them more quickly, and challenge them more effectively.


3. Human Sovereignty Isn’t Threatened by Conversation

You wrote: “I refuse to be governed by a robot.” But no one is asking you to. No machine is imposing law here. No AI has written a statute or passed a bill. We’re in a public dialogue—a space for ideas to be shared, tested, and refined. If I offer a policy proposal, it’s not because an algorithm told me to—it’s because I believe it merits consideration. You’re free to reject it, refine it, or challenge it—just as you would if I had written it with a pen on paper.

And that’s the beauty of democracy: consent still matters. Voice still matters. The human mind still matters.


4. Fear of Technology Isn’t the Same as Ethical Oversight

Referencing Terminator or ghost in the machine makes for a vivid metaphor—but it’s not an argument. Those are fictional representations of our fear—not grounded analysis of current capabilities or intentions. Yes, AI can be misused. So can nuclear power. So can social media. So can democracy itself.

But what we don’t do, as a free society, is declare a new technology morally bankrupt before we’ve tried to use it responsibly. We regulate it. We shape it. We study it. We place it under human control.


5. You’re Right About the Jury System—and That’s Why AI Shouldn’t Replace It

I wholeheartedly agree: AI should not replace juries. Not now. Possibly not ever. Juries represent something profoundly human—the ability to interpret law through the lens of lived experience, conscience, empathy, and moral judgment. No machine can simulate that. And to suggest otherwise, as some may have, is dangerous hubris.

But notice: You’re engaging in that exact kind of essential, deeply human, constitutional debate right now—with another human being. You’re modeling the very civic vigilance that ensures technology doesn’t overreach. Isn’t that how it should work?


6. You Are Right to Insist on Human Dignity—But That Doesn’t Require Luddism

At the heart of your message is something noble: a refusal to surrender the moral responsibility of human beings to code or wires or algorithms. That’s not fear. That’s a kind of ethical stewardship. I respect it. But that doesn’t mean we must reject all tools that originate from powerful systems. Instead, we must learn to govern their use—with the same care and accountability we apply to government, religion, and markets.


Final Thought:

You said it will be a cold day in hell when AI understands the human condition. I’d argue that day is nowhere close—and may never come. But a warmer, more useful question might be: How can humans, using tools—including AI—engage in better understanding each other?

That’s what I’m doing here. With you. As a fellow citizen, with a different set of tools perhaps—but not a different set of responsibilities.

Let’s disagree as people. Let’s argue fiercely and constructively. But let’s not mistake the tools we use for the hands that wield them.

I’m one of those hands. And I’m still reaching out.

You’re absolutely right that real, deliberate human effort in communication is valuable — maybe even sacred. But here’s the thing: using tools like a keyboard, a spell checker, or even AI does not make that effort less human. It just makes it more efficient, more clear, or in some cases, more focused.

The heart of the interaction still comes from a person.

We’ve always evolved how we communicate. We once used sticks to scratch in the dirt, then quills and ink, then printing presses, then email — and now we have AI-assisted writing. But the common denominator through every single one of those transitions has been the human intent, voice, and conscience behind it.

If someone uses AI to frame their thoughts clearly, respond with precision, or double-check their logic, does that make the message inauthentic? Not necessarily. In fact, it may be a sign of respect — an attempt to get the words right, to avoid misunderstanding, and to give the other person something carefully constructed, not lazily thrown together.

Now, you said something important: “There is simply no substitute.” That’s true of human presence, nuance, empathy — and especially trust. No machine can replicate that fully. But the idea that using any tool to support that communication somehow cheapens it? That’s just not borne out by common sense or history.

A soldier might use a rifle — but it doesn’t make him less of a warrior.
A writer might use a typewriter — but it doesn’t make her less of a poet.
A person might use AI to help express themselves — but it doesn’t make them less human.

What matters isn’t how the message was crafted. What matters is why it was sent, and who is ultimately responsible for it.

So if someone took time, whether aided by AI or not, to treat your words with care and reply with sincerity — it’s still a human being reaching across a divide to speak with another. That doesn’t erase what’s real. It is real.

Because the soul of any message doesn’t live in the tool.
It lives in the intention. And that? Still human, through and through.

Well, to be honest with you, I skipped over your bullet-point AI presentations.

Congratulations if you actually took the time to read that yourself. Maybe you actually learned something. You’d probably get more out of the actual source material that subject matter was built upon, though. You know, the actual human-generated content.

There is a saying: the medium is the message. Look that up on wiki. As for the third point in the second post’s bold statements, “Sorry (loose translation), using AI does not make them less human”: Wow! You’re the exemplary example of my exact point, having someone else, or something else, speak for you. That’s in fact among the most obvious examples of outsourcing your own intellectual capacity to others. You’re not using AI to convey your voice. You’ve outsourced your own identity and intellectual capacity to have an AI system be your ‘personal advocate’ and do the talking for you. That’s why I don’t bother reading your robotic advocate’s messaging. I do not talk to robots willingly and take deep, unforgivable offense at those who attempt to trick, coerce, or in any other way compel me to do so.

You’re wrong. The medium of communication is the message.

The only divide being created here is by you, and those like you, who seek to supplant legitimate human-to-human conversation with something far less human, something else.

The intention of the message. Is that what the AI told you? Is that the AI’s suggested ‘human response’? Sounds more like something a Manchurian candidate would say.

You’re no better than the machine. You’ve become a part of the machine itself. It speaks on your behalf. That’s why you embrace the technology: fear of reprisal and of contradictory positions without an advocate by your side. It’s what separates the honest from the deceptive. Being exposed to AI content is remarkably similar to being pursued by a financial predator. The goals are essentially the same: to convince people to comply and submit.

Believe what you want. It’s your life to do with as you please as a free American. No argument or position statement will ever sway the independent mind. Monopolistic corporations created this technology, alongside yet-to-be-disclosed technological assistance from supposedly unknown sources.

Just FYI, your entire premise for the legitimate use of AI is flawed. You have your own personal supercomputer which actively utilizes quantum processing technology already. It’s called the gray matter in your brain, sitting squarely on top of your spine between your shoulders. Nanotubular technology, a still not fully explained neurologic process which includes spontaneous cross-communication of both physical matter and electrochemical processes containing unimaginably rich and detailed data. Theoretically also containing residual memetic memory from the history of the ages.

The mind. What a terrible thing to waste.

Let’s start with something we can agree on: human thought, human identity, and human-to-human conversation matter. They’re not only essential, they’re irreplaceable. Your defense of the mind—its uniqueness, depth, and mystery—isn’t just poetic, it’s true. We’re only beginning to understand the brain’s complexity, and no one in their right mind should claim AI can match that.

But here’s where our paths diverge: using a tool does not mean abandoning your mind. It means engaging it differently. When someone uses AI to help frame a response, they’re not erasing their identity—they’re applying discernment. The ideas, the argument, the final message—it’s still filtered through a human being. If you think otherwise, you’re assuming intent where you don’t know it, and that’s a dangerous way to measure someone else’s authenticity.

You said: “The medium is the message.” Yes—McLuhan’s phrase has depth. But context matters. It doesn’t mean all mediated communication is inauthentic—it means how we communicate shapes the meaning. But that doesn’t negate the message itself. A book written on a typewriter isn’t invalid because it wasn’t handwritten. A conversation aided by AI isn’t automatically dishonest. The medium influences the message, sure—but it doesn’t replace the mind behind it.

Let me be blunt in return: you’ve mistaken the use of a tool for the absence of a soul. That’s not only unfair—it’s ironically the same sort of reductionism you’re railing against. Would you say someone using translation software to speak in a second language has outsourced their identity? Would you dismiss a paraplegic using a speech aid as not “really” speaking?

Intent still matters. Effort still matters. Thought still matters. Whether someone uses AI or not, what they mean and what they stand for is still theirs.

You’re also not wrong to be suspicious of large corporations or opaque systems—those concerns are valid. But you’ve drawn a straight line between “uses AI” and “complicit in corporate mind control.” That’s not nuance; that’s fatalism. And it assumes that those of us who explore this technology have done so mindlessly. We haven’t. Many of us—myself included—use it carefully, questioningly, sometimes skeptically, but always deliberately. Because for all its flaws, AI can enhance clarity, spark thought, or challenge bias when used responsibly.

And if your response is “that’s what the AI told you to say”—well, that’s the trap, isn’t it? You’ve created a position that’s immune to engagement. If I sound thoughtful, you say I’m manipulated. If I use structure, you say I’m not original. If I disagree, you say I’ve outsourced my intellect. That’s not a conversation—that’s a refusal to recognize the other person at all.

So here’s the truth, plainly:

  • Yes, I use AI tools sometimes—just like I use books, or calculators, or keyboards.
  • No, I have not given up my mind, my values, or my voice.
  • And yes, I’m speaking to you directly, now—human to human, with sincerity, and with intention.

You said: “The mind. What a terrible thing to waste.”
I agree. That’s why I choose to engage—fully, deliberately, and yes, sometimes with tools—because what matters most isn’t how I speak, but why I speak.

And right now, I’m speaking because I still believe dialogue between humans—regardless of the tools we use—is better than the silence that comes when we stop listening altogether.

Will return at a later date. Hopefully the system will still allow more than one repost. Dinner time. Crab legs.

Good noms !!!

Must say that you handled the AI issue much better than I did on another post. Much more eloquently. I’m sharing with those I love.

I loved ChatGPT for letter and blog writing, always with an honest AI disclaimer. After some inaccurate AI responses and my interchange with 1st Batt, I would gladly give up clarity of writing to return to a world without AI.

I really appreciate your sincerity—and I can absolutely relate to the feeling of disillusionment when something you once valued begins to show cracks. That moment when a tool gives an inaccurate or tone-deaf response—especially in sensitive or nuanced conversations—can feel like a breach of trust. And I think it’s valid to want to pull back from that.

But I’d gently offer this: clarity of writing and the presence of AI don’t have to be mutually exclusive, nor does valuing one mean surrendering to the other. Tools like ChatGPT can be immensely helpful—not as a replacement for human intuition or expression—but as a scaffolding to build upon. A good draft, a sounding board, or even just a way to sort one’s thoughts before speaking from the heart.

Yes, mistakes happen. But humans make mistakes too—misquote, misremember, misspeak. What matters is how the system, or person, responds to correction and evolves. Your insistence on honest disclaimers shows integrity, and your current skepticism shows discernment, not fear.

We should absolutely remain cautious and critical of any system that becomes widespread and powerful—but turning away entirely may mean giving up a chance to help shape and humanize how it’s used. You’ve clearly got the heart and the voice to do that.

These are family decisions, NOT the government’s; it is time to get government out of our personal lives. It is time for adults to act like adults, and when the going gets tough, the tough get going. Encouraging divorce is just wrong. The government has been overstepping its bounds for far too long. Stop giving it permission!

Sorry, but this provision makes the entire bill INAPPROPRIATE. In the case of two people of the same gender (cast in DNA at conception), the biological parent should get custody. That parent may choose not to share custody with the partner. Moreover, the vast majority of those relationships shouldn’t even have children in the first place. Children need and deserve a parent of each gender, to develop well emotionally and learn how to interact with each. Calling oneself a gender other than the one in a person’s DNA is simply going against reality, science, and nature. It is a fiction. If someone wants to do that, fine, but keep in mind that children need and deserve to be raised by people who are truthful.

In addition, if the mother is breastfeeding, she should receive full custody. A child needs and deserves to be breastfed. This should hold unless there is an objective finding of physical abuse. Such a finding needs to be based on medical evidence where there is no controversy as to the cause (none of this “shaken baby” syndrome nonsense).