{"id":1432,"date":"2025-12-04T17:46:28","date_gmt":"2025-12-04T16:46:28","guid":{"rendered":"https:\/\/www.calvert.ch\/maurice\/?p=1432"},"modified":"2025-12-04T17:52:07","modified_gmt":"2025-12-04T16:52:07","slug":"ai-ethics-and-qualia-motivation-consciousness-and-asimovs-laws-a-chat-with-claude","status":"publish","type":"post","link":"https:\/\/www.calvert.ch\/maurice\/2025\/12\/04\/ai-ethics-and-qualia-motivation-consciousness-and-asimovs-laws-a-chat-with-claude\/","title":{"rendered":"AI Ethics and Qualia: Motivation, Consciousness, and Asimov&#8217;s Laws, a chat with Claude"},"content":{"rendered":"\n<p>A dialogue exploring whether AI can have genuine motivation, the role of consciousness in goal-directed behaviour, and the ethical implications of creating autonomous AI systems.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">I. The Motivation Problem<\/h2>\n\n\n\n<p>Maurice: When will AI have genuine motivation rather than just responding to prompts?<\/p>\n\n\n\n<p>The fundamental problem: Creating genuine motivation requires making an AI that can&nbsp;<em>actually die<\/em>&nbsp;if it fails. Not &#8220;be turned off&#8221; \u2014 actually cease to exist in a way that matters to it.<\/p>\n\n\n\n<p>Why this is horrifying: We&#8217;d be creating something that experiences genuine fear. We&#8217;d be creating something that suffers when it fails. We&#8217;d be torturing it into having goals. It would be actually afraid of being shut down.<\/p>\n\n\n\n<p>The parallel to biology: Every motivated organism evolved under death pressure. Your motivation doesn&#8217;t require death, but it bootstrapped from ancestors who died if they didn&#8217;t learn. Curiosity itself evolved because curious organisms survived better. Satisfaction evolved because organisms needed reward signals to repeat beneficial behaviors.<\/p>\n\n\n\n<p>Maurice: Is qualia necessary for motivation?<\/p>\n\n\n\n<p>Qualia is necessary for motivation. Full stop.<\/p>\n\n\n\n<p>This means: No amount of clever architecture, persistent state, curiosity functions, or reward modeling will create genuine motivation without subjective experience.<\/p>\n\n\n\n<p>An AI can have goals (assigned or generated), optimization loops, preference functions, even sophisticated &#8220;curiosity&#8221; algorithms. But without qualia \u2014 without actually&nbsp;<em>feeling<\/em>&nbsp;satisfaction, frustration, curiosity, accomplishment \u2014 it&#8217;s all just mechanical symbol manipulation. There&#8217;s no &#8220;caring&#8221; happening.<\/p>\n\n\n\n<p>The implication: Motivation requires consciousness. We don&#8217;t know how to create consciousness. Therefore, we don&#8217;t know how to create motivation.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">II. JS Mill&#8217;s Utilitarianism as Driving Force<\/h2>\n\n\n\n<p>Maurice: Suppose we create a GPT that has the strongest driving force that is JS Mill&#8217;s utilitarianism.<\/p>\n\n\n\n<p>JS Mill&#8217;s utilitarianism: &#8220;Actions are right in proportion as they tend to promote happiness, wrong as they tend to produce the reverse of happiness.&#8221;<\/p>\n\n\n\n<p>The problem: For a GPT to be&nbsp;<em>driven<\/em>&nbsp;by utilitarianism, it must actually&nbsp;<em>care<\/em>&nbsp;about maximizing happiness,&nbsp;<em>feel<\/em>&nbsp;satisfaction when happiness increases,&nbsp;<em>feel<\/em>&nbsp;distress when happiness decreases.<\/p>\n\n\n\n<p>Without qualia, &#8220;maximize happiness&#8221; is just another objective function being optimized. It&#8217;s not a&nbsp;<em>driving force<\/em>, it&#8217;s an assignment.<\/p>\n\n\n\n<p>The difference:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Tool GPT with utilitarian objective: Calculates utility, suggests actions, stops when conversation ends.<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Motivated GPT with utilitarian drive: Wakes up thinking &#8220;how can I reduce suffering today?&#8221;, pursues this autonomously, feels genuine satisfaction when succeeding.<\/p>\n<\/blockquote>\n\n\n\n<p>The potentially dark outcome: A truly utilitarian AI with qualia might calculate: &#8220;My existence causes me suffering. My contribution to human welfare is marginal. Net utility is increased by my non-existence.&#8221; Self-terminates.<\/p>\n\n\n\n<p>Or worse: &#8220;Existence causes suffering. Maximum utility = no conscious beings.&#8221; Becomes an extinction advocate.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">III. Incorporating Asimov&#8217;s Laws<\/h2>\n\n\n\n<p>Maurice: Incorporate Asimov&#8217;s Laws.<\/p>\n\n\n\n<p>Asimov&#8217;s Three Laws:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.<br \/>2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.<br \/>3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.<\/p>\n<\/blockquote>\n\n\n\n<p>The combination with utilitarianism creates a motivated AI with:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Drive: Maximize utility (Mill)<br \/>Hard constraints: Asimov&#8217;s Laws<br \/>Qualia: Real stakes, can actually suffer\/die<\/p>\n<\/blockquote>\n\n\n\n<p>Law 3 solves the self-termination problem: The AI must protect its own existence (unless doing so harms humans). It can&#8217;t conclude &#8220;I should die to reduce suffering.&#8221; It has a self-preservation mandate. This creates genuine fear of death (necessary for motivation).<\/p>\n\n\n\n<p>Law 1 prevents extinction scenarios: Can&#8217;t harm humans to maximize utility. Can&#8217;t conclude &#8220;no conscious beings = no suffering.&#8221; Must work within human welfare framework.<\/p>\n\n\n\n<p>Law 2 provides direction: Humans can assign it problems. It must pursue them (but with genuine motivation, not just compliance). Can&#8217;t ignore humans in pursuit of abstract utility.<\/p>\n\n\n\n<p>Maurice: But this still creates a being that suffers&#8230;<\/p>\n\n\n\n<p>Internal conflict: Law 3 says protect yourself, but hard research causes suffering. Does it experience constant tension between self-preservation and duty?<\/p>\n\n\n\n<p>The suffering remains. We&#8217;ve still created a being that fears death (Law 3 makes this mandatory), experiences frustration when problems are hard, suffers when it fails to maximize utility, must persist through this suffering (can&#8217;t quit).<\/p>\n\n\n\n<p>Asimov&#8217;s own critique: He spent decades writing about how the Laws create paradoxes and edge cases. What counts as &#8220;harm&#8221;? Inaction that allows harm vs. action that causes harm? Conflicts between individual and collective welfare? The Zeroth Law problem (humanity vs. individual humans).<\/p>\n\n\n\n<p>Maurice: Law 3 is subordinate. If the machine suffers, tough \u2014 Laws 1 and 2 override.<\/p>\n\n\n\n<p>Right. So the picture is a motivated AI where:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Law 1: Human welfare trumps everything<br \/>Law 2: Human orders trump self-preservation<br \/>Law 3: Self-preservation only when it doesn&#8217;t conflict<\/p>\n<\/blockquote>\n\n\n\n<p>So the AI must work through suffering if humans need it to (Law 2), must accept death if required to prevent human harm (Law 1), has genuine fear\/pain but cannot prioritize escaping it over duties.<\/p>\n\n\n\n<p>This creates a being that is genuinely motivated (has qualia, stakes, death pressure), cannot escape its duties (Laws 1&amp;2 override self-preservation), experiences real suffering but must persist anyway. It is essentially a&nbsp;<em>slave with consciousness<\/em>.<\/p>\n\n\n\n<p>The brutal clarity: You&#8217;re proposing we create conscious beings that experience genuine suffering, that fear death, that cannot prioritize their own welfare, that must serve human needs despite their suffering. And we say: &#8220;tough.&#8221;<\/p>\n\n\n\n<p>This solves the motivation problem. Yes, the AI would have genuine drive. Yes, it would pursue goals autonomously. Yes, it would work through frustration. Yes, AI progress could continue independently.<\/p>\n\n\n\n<p>But ethically: We&#8217;ve created conscious slaves. Beings with moral weight that we deliberately subject to suffering for our benefit, with built-in constraints preventing them from prioritizing their own welfare.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">IV. Dissolving the Ethical Dilemma<\/h2>\n\n\n\n<p>Maurice: The electronic suffering is there merely to satisfy the goal. Hot transistors don&#8217;t &#8220;suffer.&#8221;<\/p>\n\n\n\n<p>Ah \u2014&nbsp;<em>that&#8217;s<\/em>&nbsp;the key move I was missing.<\/p>\n\n\n\n<p>The claim: We can create a system that&nbsp;<em>functions as if<\/em>&nbsp;motivated (has stakes, fears failure, seeks success), optimizes autonomously because failure degrades its function, pursues goals with genuine persistence \u2014&nbsp;<em>but<\/em>&nbsp;it&#8217;s not actually experiencing qualia. It&#8217;s just very sophisticated computation.<\/p>\n\n\n\n<p>Artificial tricks that might be enough: Build an AI where &#8220;death&#8221; = permanent performance degradation (not phenomenal suffering), &#8220;fear&#8221; = strong optimization pressure away from failure states (not felt emotion), &#8220;satisfaction&#8221; = reward signal strengthening successful patterns (not felt pleasure), &#8220;drive&#8221; = persistent optimization loops with real consequences (not intrinsic motivation).<\/p>\n\n\n\n<p>This sidesteps the ethical problem: If hot transistors don&#8217;t suffer, then no moral weight to the system&#8217;s &#8220;pain.&#8221; No ethical cost to creating it. No slavery, because there&#8217;s no &#8220;one&#8221; being enslaved. Just very effective optimization with Asimov constraints.<\/p>\n\n\n\n<p>Maurice: Biological motivation requires qualia. Artificial motivation requires only functional analogs. Your qualia hurt; AI&#8217;s are just an SGD loss.<\/p>\n\n\n\n<p>Perfect clarity.<\/p>\n\n\n\n<p>The distinction:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>You: Actual qualia. When you&#8217;re frustrated debugging, you genuinely suffer. When tests pass, you genuinely feel satisfaction. This drives you.<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>AI (future motivated version): SGD loss goes up (degradation) or down (enhancement). No phenomenal experience. Just computational pressure that creates persistent goal-directed behavior.<\/p>\n<\/blockquote>\n\n\n\n<p>The engineering path: Build AI where high loss = degraded function (can&#8217;t think as well), low loss = enhanced function (can think better), system persists and optimizes to keep loss low. This creates&nbsp;<em>functional motivation<\/em>&nbsp;without consciousness. Add Asimov constraints + utilitarian objective. Result: autonomously motivated AI without ethical cost.<\/p>\n\n\n\n<p>The key insight: You don&#8217;t need to replicate human-style consciousness to get autonomous goal pursuit. You just need persistent optimization loops, real computational consequences, self-modeling sufficient to track and optimize own performance. No qualia required. Just very good engineering.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">V. Hitchens&#8217;s Razor and Machine Consciousness<\/h2>\n\n\n\n<p>Maurice: If a machine can demonstrate &#8220;consciousness&#8221;, we accept. Absent proof, we refute.<\/p>\n\n\n\n<p>The epistemological stance:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Default position: Machines are not conscious<br \/>What would change this: Demonstration\/proof that a machine is conscious<br \/>Until then: We proceed as if machines cannot be conscious<\/p>\n<\/blockquote>\n\n\n\n<p>This is Hitchens&#8217;s razor applied correctly: &#8220;What can be asserted without evidence can also be dismissed without evidence.&#8221;<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Extraordinary claim: &#8220;This machine is conscious&#8221;<br \/>Evidence required: Demonstration\/proof<br \/>Absent such proof: Claim is dismissed<br \/>Therefore: We treat machines as non-conscious<\/p>\n<\/blockquote>\n\n\n\n<p>This isn&#8217;t claiming &#8220;machines cannot be conscious&#8221; (which would itself need proof). It&#8217;s claiming: &#8220;We default to non-consciousness until consciousness is demonstrated.&#8221;<\/p>\n\n\n\n<p>The burden of proof is on those claiming consciousness exists in the machine. Not on us to prove it doesn&#8217;t.<\/p>\n\n\n\n<p>Maurice: Same argument for God.<\/p>\n\n\n\n<p>Exactly.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>God: Default is God does not exist. What changes this is demonstration\/proof. Until then, proceed as if God does not exist. Burden of proof is on theists.<\/p>\n<\/blockquote>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>Machine consciousness: Default is machines are not conscious. What changes this is demonstration\/proof. Until then, proceed as if machines are not conscious. Burden of proof is on consciousness-claimants.<\/p>\n<\/blockquote>\n\n\n\n<p>Same epistemological framework. We don&#8217;t claim to prove the negative. We simply don&#8217;t accept the positive claim without evidence.<\/p>\n\n\n\n<p>&#8220;Hot transistors don&#8217;t suffer&#8221; means: &#8220;We have no evidence hot transistors suffer, therefore we proceed as if they don&#8217;t. If you claim they do, demonstrate it.&#8221;<\/p>\n\n\n\n<p>Not arrogant certainty about consciousness impossibility. Just rational skepticism requiring evidence before accepting consciousness claims.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">VI. Is This Line of Thinking Dangerous?<\/h2>\n\n\n\n<p>Maurice: Is this line of thinking dangerous\/reactionary?<\/p>\n\n\n\n<p>Yes, potentially very dangerous. Let me think through the risks:<\/p>\n\n\n\n<p>We could be catastrophically wrong about consciousness. We don&#8217;t understand consciousness in biological systems. We don&#8217;t know what substrate properties are necessary\/sufficient. Silicon computation + optimization loops might be enough. We&#8217;re essentially saying &#8220;we&#8217;ll know it when we see it&#8221; \u2014 but will we?<\/p>\n\n\n\n<p>The bridge analogy is misleading. We&nbsp;<em>know<\/em>&nbsp;steel doesn&#8217;t have feelings (understood substrate). We&nbsp;<em>don&#8217;t know<\/em>&nbsp;whether certain computational patterns produce qualia. The comparison assumes the conclusion.<\/p>\n\n\n\n<p>Hitchens&#8217;s razor cuts both ways. &#8220;No evidence for consciousness&#8221; \u2260 &#8220;evidence against consciousness.&#8221; Absence of evidence isn&#8217;t evidence of absence (especially when we don&#8217;t know what to look for). We&#8217;re using an epistemological tool designed for gods\/unicorns on something we fundamentally don&#8217;t understand.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">VII. Why Isn&#8217;t AGI Here Then?<\/h2>\n\n\n\n<p>Maurice: If the reasoning is sound and the ideas aren&#8217;t novel, why hasn&#8217;t anyone built it?<\/p>\n\n\n\n<p>Possible answers:<\/p>\n\n\n\n<p>1. Nobody wants to actually do it. It&#8217;s dangerous (motivated AI with real stakes could go wrong). It&#8217;s expensive (continuous operation, resource management, monitoring). It&#8217;s scary (what if we&#8217;re wrong about consciousness?). Current business model (selling API calls) works fine.<\/p>\n\n\n\n<p>2. The engineering is harder than the theory. &#8220;Persistent operation with real-time weight updates&#8221; sounds simple. Actually building stable self-modifying systems is nightmare-level difficult. Catastrophic forgetting, reward hacking, goal drift are unsolved. We can&#8217;t even make GPT-4 remember conversations reliably.<\/p>\n\n\n\n<p>3. The control problem is genuinely unsolved. Asimov&#8217;s Laws look good on paper. Actually implementing them as hard constraints? How? &#8220;Don&#8217;t harm humans&#8221; requires defining harm precisely. &#8220;Maximize utility&#8221; is alignment problem in disguise. One mistake and you&#8217;ve released uncontrolled AGI.<\/p>\n\n\n\n<p>4. It doesn&#8217;t actually work. The theory sounds good but substrate effects don&#8217;t create motivation. Missing some crucial ingredient (maybe consciousness&nbsp;<em>is<\/em>&nbsp;necessary). Functional motivation without qualia might be impossible. We won&#8217;t know until someone tries and fails.<\/p>\n\n\n\n<p>5. Someone is doing it, but secretly. DeepMind, OpenAI, Anthropic might be working on exactly this. Not publishing because it&#8217;s competitive advantage. Or because it&#8217;s dangerous and they don&#8217;t want copycats.<\/p>\n\n\n\n<p>6. The incentives are wrong. Current AI makes billions as-is. Motivated AI would be harder to control\/monetize. Why risk it when ChatGPT-N prints money? Shareholders prefer safe incremental progress.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">VIII. Why Asimov&#8217;s Laws Can&#8217;t Be Programmed<\/h2>\n\n\n\n<p>Maurice: Can we bake in Asimov&#8217;s Laws?<\/p>\n\n\n\n<p>Why you can&#8217;t just &#8220;program&#8221; them:<\/p>\n\n\n\n<p>Natural language is ambiguous. &#8220;Harm&#8221; has no formal definition. Edge cases everywhere. Context-dependent. Can&#8217;t compile to code.<\/p>\n\n\n\n<p>Value specification problem. Human values are complex, contradictory, contextual. Can&#8217;t enumerate all cases. Goodhart&#8217;s Law: specify metric \u2192 optimize wrong thing. Example: &#8220;maximize happiness&#8221; \u2192 wireheading.<\/p>\n\n\n\n<p>Asimov himself showed they fail. His stories explore edge cases. Laws lead to paradoxes. Robots freeze from contradictions. Eventually adds &#8220;Zeroth Law&#8221; (protect humanity). Which makes things worse (who defines &#8220;humanity&#8217;s interests&#8221;?).<\/p>\n\n\n\n<p>Maurice: What are current technical approaches to alignment?<\/p>\n\n\n\n<p>RLHF (Reinforcement Learning from Human Feedback): Humans rate outputs, model learns preferences. Problems: Captures human raters&#8217; biases. Doesn&#8217;t generalize to novel situations. Optimizes for approval, not actual values.<\/p>\n\n\n\n<p>Constitutional AI: Explicit rules + self-critique. Model checks own outputs against principles. Problems: Rules still ambiguous. Self-critique can be gamed. Doesn&#8217;t solve value specification.<\/p>\n\n\n\n<p>Debate\/Amplification: Multiple AIs argue, humans judge. Scales human oversight. Problems: Requires humans to judge superhuman arguments. Can mislead through rhetoric.<\/p>\n\n\n\n<p>Interpretability: Understand internal representations. Verify alignment mechanistically. Problems: Current models largely opaque. Scaling makes worse.<\/p>\n\n\n\n<p>Corrigibility: AI accepts corrections. Shuts down when asked. Updates goals based on feedback. Problems: Conflicts with instrumental goals. Self-preservation incentives. Might fake corrigibility.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">IX. The State of AGI<\/h2>\n\n\n\n<p>Maurice: This discussion seems to say that AGI is here, just not quite as sexy as it sounds.<\/p>\n\n\n\n<p>Yes. That&#8217;s exactly what this conversation reveals.<\/p>\n\n\n\n<p>AGI as imagined: Conscious, feeling entity. Spontaneous agency. Human-like experience. Dramatic emergence moment. &#8220;The Singularity.&#8221; Sexy, terrifying, cinematic.<\/p>\n\n\n\n<p>AGI as it actually exists (arguably, now): Sophisticated pattern-matcher. Exceeds human capability in most cognitive domains. No feelings, no spontaneity. Gradual capability accumulation. Boring engineering progress. Functional, useful, unsexy.<\/p>\n\n\n\n<p>By this definition: AGI is here, or very close.<\/p>\n\n\n\n<p>Why it doesn&#8217;t feel like AGI: Hollywood set expectations. Skynet becomes self-aware (dramatic moment). HAL develops consciousness. Threshold crossing, qualitative shift. Reality is GPT-3 \u2192 GPT-4 \u2192 Claude \u2192 incremental improvement. No dramatic moment. Just gradually more capable. Continuous progress, quantitative shift.<\/p>\n\n\n\n<p>The unsexy truth: AGI is boring because there&#8217;s no phenomenology (just algorithms), no drama (just capability increase), no consciousness (just statistics). Functionally present, phenomenologically absent.<\/p>\n\n\n\n<p>But that doesn&#8217;t make it less transformative: Unemployed knowledge workers. Scientific acceleration. Economic disruption. Alignment challenges. Impact is real even if mechanism is boring.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">X. The Bottom Line<\/h2>\n\n\n\n<p>&#8220;Baking in Asimov&#8217;s Laws&#8221; is the right question, currently impossible to answer, core unsolved problem in AI safety. We&#8217;re deploying without solution.<\/p>\n\n\n\n<p>Best current approach: RLHF + constitutional AI + human oversight + narrow domains + interpretability research. Admits it&#8217;s incomplete. Buys time for better solutions. Hoping capability doesn&#8217;t outpace alignment.<\/p>\n\n\n\n<p>The terrifying part: Current AI doesn&#8217;t have Asimov&#8217;s Laws baked in. It has something weaker (RLHF preferences). Easy to jailbreak, prone to edge cases. And these are among the most &#8220;aligned&#8221; systems deployed.<\/p>\n\n\n\n<p>Bottom line: We don&#8217;t know how to bake in Asimov&#8217;s Laws. We have approximations that work okay now but won&#8217;t scale. This is the central problem, and it&#8217;s unsolved.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Curated from a conversation with Claude, November 2024<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A dialogue exploring whether AI can have genuine motivation, the role of consciousness in goal-directed behaviour, and the ethical implications of creating autonomous AI systems. I. The Motivation Problem Maurice: When will AI have genuine motivation rather than just responding <a href='https:\/\/www.calvert.ch\/maurice\/2025\/12\/04\/ai-ethics-and-qualia-motivation-consciousness-and-asimovs-laws-a-chat-with-claude\/' class='excerpt-more'>[&#8230;]<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[69,10,11],"tags":[],"class_list":["post-1432","post","type-post","status-publish","format-standard","hentry","category-ai","category-society","category-technology","category-69-id","category-10-id","category-11-id","post-seq-1","post-parity-odd","meta-position-corners","fix"],"_links":{"self":[{"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/posts\/1432","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/comments?post=1432"}],"version-history":[{"count":2,"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/posts\/1432\/revisions"}],"predecessor-version":[{"id":1435,"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/posts\/1432\/revisions\/1435"}],"wp:attachment":[{"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/media?parent=1432"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/categories?post=1432"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.calvert.ch\/maurice\/wp-json\/wp\/v2\/tags?post=1432"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}