Scope
This policy applies to:
- All AI-powered product features used within the Sense platform (e.g., job matching, content generation, conversational agents).
- All third-party AI systems integrated into Sense’s workflows.
- Internal uses of AI for business operations (sales, support, marketing).
Excluded: Purely rule-based automation (e.g., simple if-then email triggers) and manual workflows not involving machine learning, deep learning, or large language models.
Policy Statement
At Sense, we believe AI should enhance human decision-making, not replace it. As a company that builds intelligent tools for talent engagement, we recognize our responsibility to ensure AI is:
- Fair – Designed to avoid bias and discrimination
- Transparent – Explainable to users and candidates
- Private & Secure – Respectful of customer and candidate data
- Accountable – Governed by rigorous standards and human oversight
- Compliant – Aligned with applicable global AI and employment laws
This policy governs how we design, deploy, monitor, and continuously improve our AI systems — grounded in our ethical values and industry-leading governance.
How We Use AI
We embed AI across our platform to improve hiring experiences and operational efficiency.
| Capability | Description |
| --- | --- |
| Candidate Matching | AI recommends top-fit profiles based on an analysis of skills, experience, location, and other job-related criteria against the requirements of a specific role. |
| Generative Tools | Job descriptions, screening questions, and outreach messages created with LLMs |
| Conversational AI | Chatbots and voicebots for candidate interactions and FAQs |
| AI Insights | Spam filtering, skills extraction, and analysis of non-personally identifiable engagement patterns to help prioritize outreach. |
| Internal Efficiency | Support routing, content drafting, internal analytics |
All AI outputs are subject to human review and can be edited or overridden.
Responsible AI Principles
We follow six core principles:
Human Oversight
All decisions with employment impact (e.g., hiring, rejection) are made by people. AI is a decision support system — not a decision maker.
Fairness
We do not use or infer protected attributes (e.g., race, gender, age). We audit models regularly to detect and reduce bias — both internally and via independent third parties.
Transparency
We inform users when AI is involved. Match scores are explainable. Generative content is marked and modifiable. Clients can request model cards and rationale behind outputs.
Privacy & Data Control
Customer data is encrypted, access-controlled, and only used in AI models with consent. Sensitive information is anonymized or excluded where feasible. Clients may opt out of training contributions.
Accountability
Each model has an owner. Every use case follows a lifecycle:
Map → Measure → Manage → Monitor. We maintain logs, test results, incident histories, and retraining plans for all key systems.
Risk-Based Classification
| Tier | Description | Example | Oversight |
| --- | --- | --- | --- |
| High | AI influences hiring | Candidate-job matching | Bias audit, legal sign-off, human-in-loop |
| Medium | AI supports screening | Chatbot scoring assistant | Disclosure, monitoring, fallback logic |
| Low | Content assist | AI-written job descriptions/emails | Editable output, internal QA |
User & Customer Controls
We prioritize visibility and choice:
- AI content is always editable by humans
- Match scoring includes explainable reasons
- Customers may opt out of data training
- Candidates are notified when engaging with AI and may opt out of engaging with automated systems
- Feedback loops allow users to report errors or concerns
- Escalation paths ensure human intervention is always available
Acceptable Use
Misuse of Sense AI features is strictly prohibited, including:
- Inferring race, gender, religion, disability, or similar attributes
- Generating harmful, misleading, political, or offensive content
- Plagiarism or misuse of AI-generated content in academic/regulated contexts
- Reverse engineering Sense’s proprietary AI systems
Note: Violations may result in service suspension or legal action.
Policy Maintenance
This policy is reviewed annually, or earlier if:
- New AI features are launched
- Global regulations are updated
- Risk assessments or customer feedback warrant revision
We maintain a version history and compliance mapping to major standards. Public summaries and model documentation are available upon client request.
Contact
For questions, audit documentation, or AI-related concerns, contact: 📧 ai-governance@sensehq.com