AI Disclosure
Last updated: April 2026
We take transparency about how AI is used in our services seriously. This page explains the AI technology we rely on, the nature and limitations of LYRA (our AI Workflow Advisor), how your data is handled when you interact with AI-powered features, and what responsible use looks like from our side and yours.
This disclosure is provided for informational purposes and does not constitute legal advice. Please consult a qualified professional for advice specific to your circumstances.
1. AI Technology We Use
LaunchYourAI’s services — in particular, LYRA — are powered by large language model (LLM) technology provided by Anthropic through their commercial API. Anthropic develops Claude, a family of AI assistants designed for safety and helpfulness. When you interact with LYRA, your inputs are processed by Anthropic’s AI models and returned through our platform interface.
We access Anthropic’s API under commercial terms that include data use commitments. We do not build or own the underlying AI model. Our role is to configure how the model behaves — the instructions, constraints, tone, and focus areas — and to present its outputs in a way that is useful for our clients. The intelligence behind LYRA’s responses is Anthropic’s; the workflow focus, framing, and guardrails are ours.
2. What LYRA Is — and What She Is Not
LYRA is an AI assistant, not a human
LYRA is not a person. She does not have lived experience, professional credentials, or contextual judgment built from years of working in any specific industry. She is an AI system designed to ask good questions, surface patterns in what you share, and provide general workflow guidance based on your inputs. When you talk to LYRA, you are interacting with a software system — a very capable one, but a software system nonetheless.
At no point during a LYRA conversation are you speaking with a LaunchYourAI team member in real time. LYRA is clearly presented as an AI assistant across our platform. If a human follow-up is warranted, LYRA will direct you to book a diagnostic or contact our team directly.
LYRA may make mistakes
Like all current AI systems, LYRA can produce outputs that are inaccurate, incomplete, or misleading. This is not a rare edge case — it is a documented property of large language models. LYRA may:
- Misunderstand the context or nuance of what you share
- Produce suggestions that sound plausible but do not fit your specific business situation
- Fail to flag important factors that a human expert would notice
- Give inconsistent answers if asked the same question different ways
- Express confidence even when the underlying output is uncertain
These are not bugs we can fully eliminate — they are inherent to how current AI models work. We try to mitigate them through careful configuration, clear framing, and by positioning LYRA as a starting point rather than a final answer.
LYRA provides general workflow guidance only
LYRA is designed to help you think through business workflows and operational friction. She is built to ask questions, surface patterns, and point toward potential AI opportunities. That is the full scope of her purpose.
LYRA is not a substitute for professional advice. Specifically:
- Not legal advice: Nothing LYRA says should be taken as legal guidance, legal analysis, or legal opinion. Do not use LYRA to navigate compliance requirements, contracts, employment matters, or any situation with legal stakes without consulting a licensed attorney.
- Not medical advice: LYRA does not provide health, clinical, or medical guidance of any kind.
- Not tax or financial advice: LYRA does not provide tax, accounting, investment, or financial planning guidance. For these matters, consult a qualified CPA, financial advisor, or tax professional.
- Not HR or employment advice: LYRA does not provide guidance on employment law, hiring practices, termination, or workplace compliance.
- Not a substitute for domain expertise: LYRA lacks the deep professional knowledge that comes from years of practice in any specific field. Her suggestions are general and exploratory, not authoritative.
3. Limitations of AI-Powered Services
Beyond LYRA specifically, our broader services involve AI at multiple stages — in the diagnostic process, in identifying workflow opportunities, and in the tools we may build. The following limitations apply across all AI-powered components of our work:
AI does not know what it does not know
AI systems can produce outputs with apparent confidence regardless of whether the underlying information is reliable. An AI response that sounds authoritative may be partially or entirely wrong. Always apply your own judgment, consult relevant professionals, and test AI outputs in low-stakes contexts before relying on them for consequential decisions.
AI outcomes depend heavily on context
The quality of AI-generated guidance is highly dependent on the quality and completeness of the information provided. LYRA can only work with what you share. Incomplete or imprecise inputs often lead to incomplete or imprecise suggestions.
AI is not a guarantee of results
Implementing AI tools, automations, or workflows does not guarantee specific business outcomes. Time savings, productivity gains, and operational improvements depend on how solutions are implemented, adopted, and maintained. See our Terms of Use for our full disclaimer on outcomes.
AI evolves rapidly
The AI landscape is changing quickly. The tools, capabilities, and best practices we describe today may look different in six or twelve months. We aim to keep our services current, but we cannot guarantee that any AI tool or workflow will remain optimal over time without review and updating.
4. Data Handling in AI Interactions
When you interact with LYRA, your conversation input is sent to Anthropic’s API for processing. Here is what that means in practice:
Third-party AI processing
Your inputs — the words you type in a LYRA conversation — leave our servers and are transmitted to Anthropic’s infrastructure for processing. Anthropic is a well-established AI safety company, and our access is governed by commercial API terms that address how API data is handled. As of this writing, Anthropic’s commercial API terms include provisions restricting the use of API data for model training without customer consent. This means your conversations through our platform should not be used to train Anthropic’s models.
We recommend reviewing Anthropic’s Privacy Policy directly to understand their current data practices.
Memory isolation between users
LYRA sessions are isolated. Your conversation is not shared with, visible to, or used in the context of any other user’s session. We do not cross-reference user conversations. Each user’s LYRA interaction is treated as a discrete, contained session.
No cross-user information sharing
LYRA does not learn from your conversation in a way that benefits or exposes information to other users. There is no shared memory pool across users. If LYRA seems to know something about your industry, that knowledge comes from the AI model’s pre-training data — not from conversations with other businesses on our platform.
Session persistence
Unless a specific persistent memory feature is explicitly offered and opted into, LYRA does not remember your conversation after your session ends. If you return to LYRA in a new session, she will not have context from previous conversations. Each session starts fresh.
5. Responsible Use Expectations
We ask that all users engage with LYRA and our AI-powered services in good faith and for the purposes they are designed for. Specifically, we expect users to:
- Not attempt to manipulate LYRA into providing responses outside her intended scope (e.g., generating harmful content, bypassing safety guardrails, or roleplaying as a different AI system)
- Not share sensitive personal data belonging to third parties — clients, employees, or patients — through LYRA without appropriate authorization and a clear need to do so
- Not attempt to extract, reproduce, or reverse-engineer LYRA’s system prompts, configuration, or operational instructions
- Treat LYRA’s outputs as a starting point for thinking, not as authoritative guidance requiring no further review
- Apply appropriate professional judgment before acting on any AI-generated suggestion
See our Acceptable Use Policy for the full scope of use restrictions that apply to our platform.
6. Contact for AI Concerns
If you have experienced a LYRA interaction that produced harmful, inaccurate, or inappropriate content, or if you have concerns about how AI is being used in our services, please contact us:
LaunchYourAI — AI Concerns
Email: legal@launchyourai.com
Visit: Legal & Privacy Contact
We take AI safety and responsible deployment seriously. Feedback from users helps us improve.