Data Engineering
100 skills
moai-alfred-workflow-core
modu-ai/moai-adk: Core 4-step Alfred workflow execution system with intent clarification, task planning, progress tracking, and quality gates. Essential for systematic development with transparency and traceability. Use when executing multi-step tasks, planning complex features, or ensuring quality standards.
grey-haven-deployment-cloudflare
greyhaven-ai/claude-code-config: Deploy TanStack Start applications to Cloudflare Workers/Pages with GitHub Actions, Doppler, Wrangler, database migrations, and rollback procedures. Use when deploying Grey Haven applications.
n8n-workflow-patterns
czlonkowski/n8n-skills: Proven workflow architectural patterns from real n8n workflows. Use when building new workflows, designing workflow structure, choosing workflow patterns, planning workflow architecture, or asking about webhook processing, HTTP API integration, database operations, AI agent workflows, or scheduled tasks.
workflow-orchestration-patterns
seth-schultz/orchestr8: Expertise in autonomous workflow design patterns including multi-phase execution, quality gates, agent coordination, and success criteria definition. Activate when designing or creating workflow slash commands. Guides multi-phase workflow design with checkpoints and quality gates, ensuring workflows are autonomous, reliable, and production-ready.
performance-monitoring
Dmccarty30/Journeyman-Jobs: Tracks agent performance metrics, identifies bottlenecks, detects degradation patterns, generates performance reports. Monitors completion rates, response times, success rates, and resource utilization for Journeyman Jobs development optimization.
sql-optimization-patterns
wshobson/agents: Master SQL query optimization, indexing strategies, and EXPLAIN analysis to dramatically improve database performance and eliminate slow queries. Use when debugging slow queries, designing database schemas, or optimizing application performance.
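To illustrate the EXPLAIN-driven workflow this skill describes, here is a minimal sketch using Python's built-in sqlite3; the `orders` table and `idx_orders_customer` index are hypothetical. Production engines (PostgreSQL, MySQL) expose richer plans via their own EXPLAIN variants, but the before/after comparison is the same idea.

```python
import sqlite3

# Hypothetical table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

def plan(sql):
    # EXPLAIN QUERY PLAN reports how SQLite will execute the statement;
    # the human-readable detail is the last column of each row.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)  # full table scan, e.g. "SCAN orders"
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)   # index lookup, e.g. "SEARCH orders USING INDEX idx_orders_customer ..."
print(before)
print(after)
```

Reading the plan before and after adding an index is the core loop: confirm the slow query is a scan, add a covering index on the filter column, confirm the plan becomes a search.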
polars
K-Dense-AI/claude-scientific-skills: Fast DataFrame library (Apache Arrow). Select, filter, group_by, joins, lazy evaluation, CSV/Parquet I/O, expression API, for high-performance data analysis workflows.
vaex
K-Dense-AI/claude-scientific-skills: Use this skill for processing and analyzing large tabular datasets (billions of rows) that exceed available RAM. Vaex excels at out-of-core DataFrame operations, lazy evaluation, fast aggregations, efficient visualization of big data, and machine learning on large datasets. Apply when users need to work with large CSV/HDF5/Arrow/Parquet files, perform fast statistics on massive datasets, create visualizations of big data, or build ML pipelines that don't fit in memory.
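The out-of-core idea can be sketched with nothing but the standard library: stream rows and keep only running aggregates, so memory use stays constant regardless of file size. Vaex applies the same principle at much larger scale with memory-mapped columns; the CSV content below is invented for illustration.

```python
import csv
import io

# Stand-in for a file too large to load: iterate row by row,
# retaining only the running per-group totals.
data = io.StringIO("region,sales\nnorth,10\nsouth,20\nnorth,5\n")
totals = {}
for row in csv.DictReader(data):
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["sales"])
print(totals)  # {'north': 15, 'south': 20}
```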
excel-pivot-wizard
jeremylongshore/claude-code-plugins-plus: Generate pivot tables and charts from raw data using natural language - analyze sales by region, summarize data by category, and create visualizations effortlessly.
quality-metrics
proffesor-for-testing/agentic-qe: Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.
test-data-management
proffesor-for-testing/agentic-qe: Strategic test data generation, management, and privacy compliance. Use when creating test data, handling PII, ensuring GDPR/CCPA compliance, or scaling data generation for realistic testing scenarios.
discover-data
rand/cc-polymath: Automatically discover data pipeline and ETL skills when working with ETL. Activates for data development tasks.
discover-database
rand/cc-polymath: Automatically discover database skills when working with SQL, PostgreSQL, MongoDB, Redis, database schema design, query optimization, migrations, connection pooling, ORMs, or database selection. Activates for database design, optimization, and implementation tasks.
dbt-artifacts
sfc-gh-dflippo/snowflake-dbt-demo: Monitor dbt execution using the dbt Artifacts package. Use this skill when you need to track test and model execution history, analyze run patterns over time, monitor data quality metrics, or enable programmatic access to dbt execution metadata across any dbt version or platform.
example-data-processor
fkesheh/skill-mcp: Process CSV data files by cleaning, transforming, and analyzing them. Use this when users need to work with CSV files, clean data, or perform basic data analysis tasks.
csv-processor
CuriousLearner/devkit: Parse, transform, and analyze CSV files with advanced data manipulation capabilities.
json-transformer
CuriousLearner/devkit: Transform, manipulate, and analyze JSON data structures with advanced operations.
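A typical JSON transformation of the sort such a tool performs, sketched with the standard library (the record is invented for illustration): flattening nested objects into dot-separated keys so they load cleanly into tabular stores.

```python
import json

def flatten(obj, prefix=""):
    # Recursively walk nested dicts, joining key paths with dots.
    flat = {}
    for key, value in obj.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, path + "."))
        else:
            flat[path] = value
    return flat

record = json.loads('{"user": {"id": 7, "name": "Ada"}, "active": true}')
print(flatten(record))  # {'user.id': 7, 'user.name': 'Ada', 'active': True}
```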
query-builder
CuriousLearner/devkit: Interactive database query builder for generating optimized SQL and NoSQL queries.
data-analysis
meirm/askGPT: Perform data analysis tasks including data cleaning, statistical analysis, visualization, and insight generation. Use when the user asks to analyze data, perform statistical analysis, create visualizations, or extract insights from datasets.
exploratory-data-analysis
lifangda/claude-plugins: EDA toolkit. Analyze CSV/Excel/JSON/Parquet files, statistical summaries, distributions, correlations, outliers, missing data, visualizations, markdown reports, for data profiling and insights.
database-architect
marcioaltoe/claude-craftkit: Expert database schema designer and Drizzle ORM specialist. Use when user needs database design, schema creation, migrations, query optimization, or Postgres-specific features. Examples - "design a database schema for users", "create a Drizzle table for products", "help with database relationships", "optimize this query", "add indexes to improve performance", "design database for multi-tenant app".
database-monitoring
aj-geddes/useful-ai-prompts: Monitor database performance and health. Use when setting up monitoring, analyzing metrics, or troubleshooting database issues.
database-performance-debugging
aj-geddes/useful-ai-prompts: Debug database performance issues through query analysis, index optimization, and execution plan review. Identify and fix slow queries.
stress-testing
aj-geddes/useful-ai-prompts: Test system behavior under extreme load conditions to identify breaking points, capacity limits, and failure modes. Use for stress test, capacity testing, breaking point analysis, spike test, and system limits validation.
hypothesis
anam-org/metaxy: Use Hypothesis for property-based testing to automatically generate comprehensive test cases, find edge cases, and write more robust tests with minimal example shrinking. Includes Polars parametric testing integration.
database-schema-designer
ArieGoldkin/ai-agent-hub: Use this skill when designing database schemas for relational (SQL) or document (NoSQL) databases. Provides normalization guidelines, indexing strategies, migration patterns, and performance optimization techniques. Ensures scalable, maintainable, and performant data models.
executive-dashboard-generator
OneWave-AI/claude-skills: Transform raw data from CSVs, Google Sheets, or databases into executive-ready reports with visualizations, key metrics, trend analysis, and actionable recommendations. Creates data-driven narratives for leadership. Use when users need to turn spreadsheets into executive summaries or board reports.
deployment-pipeline-design
wshobson/agents: Design multi-stage CI/CD pipelines with approval gates, security checks, and deployment orchestration. Use when architecting deployment workflows, setting up continuous delivery, or implementing GitOps practices.
database-migration
wshobson/agents: Execute database migrations across ORMs and platforms with zero-downtime strategies, data transformation, and rollback procedures. Use when migrating databases, changing schemas, performing data transformations, or implementing zero-downtime deployment strategies.
ml-pipeline-workflow
wshobson/agents: Build end-to-end MLOps pipelines from data preparation through model training, validation, and production deployment. Use when creating ML pipelines, implementing MLOps practices, or automating model training and deployment workflows.
bd-issue-tracking
steveyegge/beads: Track complex, multi-session work with dependency graphs using the bd (beads) issue tracker. Use when work spans multiple sessions, has complex dependencies, or requires persistent context across compaction cycles. For simple single-session linear tasks, TodoWrite remains appropriate.
insforge-schema-patterns
InsForge/InsForge: Database schema patterns for InsForge including social graphs, e-commerce, content publishing, and multi-tenancy with RLS policies. Use when designing data models with relationships, foreign keys, or Row Level Security.
databases
mrgoonie/claudekit-skills: Work with MongoDB (document database, BSON documents, aggregation pipelines, Atlas cloud) and PostgreSQL (relational database, SQL queries, psql CLI, pgAdmin). Use when designing database schemas, writing queries and aggregations, optimizing indexes for performance, performing database migrations, configuring replication and sharding, implementing backup and restore strategies, managing database users and permissions, analyzing query performance, or administering production databases.
wheels-migration-generator
wheels-dev/wheels: Generate database-agnostic Wheels migrations for creating tables, altering schemas, and managing database changes. Use when creating or modifying database schema, adding tables, columns, indexes, or foreign keys. Prevents database-specific SQL and ensures cross-database compatibility.
pgtap-testing
pgflow-dev/pgflow: Guide pgTAP test writing in pgflow. Use when user asks to create tests, write tests, add tests, create test files, fix tests, improve tests, add missing tests, create realtime tests, write database tests, test SQL functions, test broadcast events, test realtime events, add test coverage, create step tests, create run tests, test pgflow functions, or asks how to test database scenarios. Provides test patterns, helper functions, and realtime event testing examples. Use for any pgTAP test creation or modification.
moai-alfred-best-practices
modu-ai/moai-adk: Quality gates, compliance patterns, and mandatory rules for Alfred workflow execution. Enforces TRUST 5 principles, TAG validation, Skill invocation rules, and AskUserQuestion scenarios. Use when validating workflow compliance, checking quality gates, enforcing MoAI-ADK standards, or verifying rule adherence.
moai-alfred-workflow
modu-ai/moai-adk: Guide 4-step workflow execution with task tracking and quality gates.
moai-lang-sql
modu-ai/moai-adk: SQL best practices with pgTAP, sqlfluff 3.2, query optimization, and migration management.
database-implementation
jpicklyk/task-orchestrator: Database schema design, migrations, query optimization with SQL, Exposed ORM, Flyway. Use for database, migration, schema, sql, flyway tags. Provides migration patterns, validation commands, rollback strategies.
senior-data-engineer
alirezarezvani/claude-skills: World-class data engineering skill for building scalable data pipelines, ETL/ELT systems, and data infrastructure. Expertise in Python, SQL, Spark, Airflow, dbt, Kafka, and modern data stack. Includes data modeling, pipeline orchestration, data quality, and DataOps. Use when designing data architectures, building data pipelines, optimizing data workflows, or implementing data governance.
backend-migrations
maxritter/claude-codepro: Create and manage database migrations with reversible changes, proper naming conventions, and zero-downtime deployment strategies. Use this skill when creating database migration files, modifying schema, adding or removing tables/columns, managing indexes, or handling data migrations. Apply when working with migration files (e.g., db/migrate/, migrations/, alembic/, sequelize migrations), schema changes, database versioning, rollback implementations, or when you need to ensure backwards compatibility during deployments. Use for any task involving database structure changes, index creation, constraint modifications, or data transformation scripts.
backend-queries
maxritter/claude-codepro: Write secure, optimized database queries using parameterized queries, eager loading to prevent N+1 problems, and strategic indexing for performance. Use this skill when writing SQL queries, ORM queries, database interactions, or optimizing data fetching logic. Apply when working with query files, repository patterns, data access layers, SQL statements, ORM methods (ActiveRecord, Sequelize, Prisma queries), JOIN operations, WHERE clauses, preventing SQL injection, implementing eager loading or includes, adding query timeouts, wrapping operations in transactions, or caching expensive queries. Use for any task involving database reads, writes, complex queries, query optimization, or data fetching performance.
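The injection-prevention pattern this description leads with can be sketched in a few lines with sqlite3; the `users` table is hypothetical. Bound parameters are sent to the engine as values, never spliced into the SQL text, so attacker-controlled input cannot change the query's structure.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES ('ada@example.com')")

# Untrusted input: string-interpolating this into the SQL would turn the
# WHERE clause into a tautology; as a bound parameter it is just a string.
malicious = "' OR '1'='1"
rows = conn.execute("SELECT id FROM users WHERE email = ?", (malicious,)).fetchall()
print(rows)  # [] -- the payload matches no actual email
```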
aws-elastic-beanstalk-deployment-best-practices
pr-pm/prpm: Robust deployment patterns for Elastic Beanstalk with GitHub Actions, Pulumi, and edge case handling.
rr-drizzle
roderik/ai-rules: Comprehensive guidance for implementing type-safe database operations with Drizzle ORM and PostgreSQL. Use when working with database schemas, queries, migrations, or performance optimization in TypeScript applications. Automatically triggered when working with Drizzle schema files, database queries, or PostgreSQL operations.
executing-plans
starwards/starwards: Structured approach for implementing architect-provided plans through controlled batch execution with review checkpoints - execute in batches (default 3 tasks), verify each step, stop on blockers; don't force through blockers.
prisma
blencorp/claude-code-kit: Prisma ORM patterns including Prisma Client usage, queries, mutations, relations, transactions, and schema management. Use when working with Prisma database operations or schema definitions.
dbt-commands
sfc-gh-dflippo/snowflake-dbt-demo: dbt command-line operations, model selection syntax, Jinja patterns, troubleshooting, and debugging. Use this skill when running dbt commands, selecting specific models, debugging compilation errors, using Jinja macros, or troubleshooting dbt execution issues.
dbt-core
sfc-gh-dflippo/snowflake-dbt-demo: Managing dbt-core locally - installation, configuration, project setup, package management, troubleshooting, and development workflow. Use this skill for all aspects of local dbt-core development including non-interactive scripts for environment setup with conda or venv, and comprehensive configuration templates for profiles.yml and dbt_project.yml.
dbt-materializations
sfc-gh-dflippo/snowflake-dbt-demo: Choosing and implementing dbt materializations (ephemeral, view, table, incremental, snapshots, Python models). Use this skill when deciding on materialization strategy, implementing incremental models, setting up snapshots for SCD Type 2 tracking, or creating Python models for machine learning workloads.
dbt-modeling
sfc-gh-dflippo/snowflake-dbt-demo: Writing dbt models with proper CTE patterns, SQL structure, and layer-specific templates. Use this skill when writing or refactoring dbt models, implementing CTE patterns, creating staging/intermediate/mart models, or ensuring proper SQL structure and dependencies.
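The CTE layering convention can be sketched in plain SQLite rather than dbt itself: a staging CTE filters the raw source, an intermediate CTE aggregates, and the final SELECT reads only from the last CTE. Table and column names here are invented; in dbt each layer would typically be its own model referenced via `ref()`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                 [(1, 10.0, "paid"), (2, 5.0, "void"), (3, 7.5, "paid")])

# Staging CTE -> intermediate CTE -> final SELECT, mirroring the
# staging/intermediate/mart layering dbt models conventionally use.
sql = """
WITH staged AS (
    SELECT id, amount FROM raw_orders WHERE status = 'paid'
),
aggregated AS (
    SELECT COUNT(*) AS n, SUM(amount) AS revenue FROM staged
)
SELECT n, revenue FROM aggregated
"""
n, revenue = conn.execute(sql).fetchone()
print(n, revenue)  # 2 17.5
```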
dbt-performance
sfc-gh-dflippo/snowflake-dbt-demo: Optimizing dbt and Snowflake performance through materialization choices, clustering keys, warehouse sizing, and query optimization. Use this skill when addressing slow model builds, optimizing query performance, sizing warehouses, implementing clustering strategies, or troubleshooting performance issues.
dbt-projects-on-snowflake
sfc-gh-dflippo/snowflake-dbt-demo: Deploying, managing, executing, and monitoring dbt projects natively within Snowflake using dbt PROJECT objects and event tables. Use this skill when you want to set up dbt development workspaces, deploy projects to Snowflake, schedule automated runs, monitor execution with event tables, or enable team collaboration directly in Snowflake.
dbt-projects-snowflake-setup
sfc-gh-dflippo/snowflake-dbt-demo: Step-by-step setup guide for dbt Projects on Snowflake including prerequisites, external access integration, Git API integration, event table configuration, and automated scheduling. Use this skill when setting up dbt Projects on Snowflake for the first time or troubleshooting setup issues.
dbt-testing
sfc-gh-dflippo/snowflake-dbt-demo: dbt testing strategies using dbt_constraints for database-level enforcement, generic tests, and singular tests. Use this skill when implementing data quality checks, adding primary/foreign key constraints, creating custom tests, or establishing comprehensive testing frameworks across bronze/silver/gold layers.
schemachange
sfc-gh-dflippo/snowflake-dbt-demo: Deploying and managing Snowflake database objects using version control with schemachange. Use this skill when you need to manage database migrations for objects not handled by dbt, implement CI/CD pipelines for schema changes, or coordinate deployments across multiple environments.
snowflake-cli
sfc-gh-dflippo/snowflake-dbt-demo: Executing SQL, managing Snowflake objects, deploying applications, and orchestrating data pipelines using the Snowflake CLI (snow) command. Use this skill when you need to run SQL scripts, deploy Streamlit apps, execute Snowpark procedures, manage stages, automate Snowflake operations from CI/CD pipelines, or work with variables and templating.
snowflake-connections
sfc-gh-dflippo/snowflake-dbt-demo: Configuring Snowflake connections using connections.toml (for Snowflake CLI, Streamlit, Snowpark) or profiles.yml (for dbt) with multiple authentication methods (SSO, key pair, username/password, OAuth), managing multiple environments, and overriding settings with environment variables. Use this skill when setting up Snowflake CLI, Streamlit apps, dbt, or any tool requiring Snowflake authentication and connection management.
streamlit-development
sfc-gh-dflippo/snowflake-dbt-demo: Developing, testing, and deploying Streamlit data applications on Snowflake. Use this skill when you're building interactive data apps, setting up local development environments, testing with pytest or Playwright, or deploying apps to Snowflake using Streamlit in Snowflake.
ray-data
zechenzhangAGI/claude-ai-research-skills: Scalable data processing for ML workloads. Streaming execution across CPU/GPU, supports Parquet/CSV/JSON/images. Integrates with Ray Train, PyTorch, TensorFlow. Scales from single machine to 100s of nodes. Use for batch inference, data preprocessing, multi-modal data loading, or distributed ETL pipelines.
grey-haven-data-modeling
greyhaven-ai/claude-code-config: Design database schemas for Grey Haven multi-tenant SaaS - SQLModel models, Drizzle schema, multi-tenant isolation with tenant_id and RLS, timestamp fields, foreign keys, indexes, migrations, and relationships. Use when creating database tables.
grey-haven-database-conventions
greyhaven-ai/claude-code-config: Apply Grey Haven database conventions - snake_case fields, multi-tenant with tenant_id and RLS, proper indexing, migrations for Drizzle (TypeScript) and SQLModel (Python). Use when designing schemas, writing database code, creating migrations, setting up RLS policies, or when user mentions 'database', 'schema', 'Drizzle', 'SQLModel', 'migration', 'RLS', 'tenant_id', 'snake_case', 'indexes', or 'foreign keys'.
grey-haven-ontological-documentation
greyhaven-ai/claude-code-config: Create comprehensive ontological documentation for Grey Haven systems - extract domain concepts from TanStack Start and FastAPI codebases, model semantic relationships, generate visual representations of system architecture, and document business domains. Use when onboarding, documenting architecture, or analyzing legacy systems.
cocoindex
cocoindex-io/cocoindex-claude: Comprehensive toolkit for developing with the CocoIndex library. Use when users need to create data transformation pipelines (flows), write custom functions, or operate flows via CLI or API. Covers building ETL workflows for AI data processing, including embedding documents into vector databases, building knowledge graphs, creating search indexes, or processing data streams with incremental updates.
aws-sdk-java-v2-dynamodb
giuseppe-trisciuoglio/developer-kit: Amazon DynamoDB patterns using AWS SDK for Java 2.x. Use when creating, querying, scanning, or performing CRUD operations on DynamoDB tables, working with indexes, batch operations, transactions, or integrating with Spring Boot applications.
sql
maragudk/skills: Guide for working with SQL queries, in particular for SQLite. Use this skill when writing SQL queries, analyzing database schemas, designing migrations, or working with SQLite-related code.
docker-helper
CuriousLearner/devkit: Docker Compose generation, optimization, and troubleshooting assistance.
query-optimizer
CuriousLearner/devkit: Analyze and optimize SQL queries for better performance and efficiency.
schema-visualizer
CuriousLearner/devkit: Generate database schema diagrams, ERDs, and documentation from database schemas.
apache-airflow-orchestration
manutej/luxor-claude-marketplace: Complete guide for Apache Airflow orchestration including DAGs, operators, sensors, XComs, task dependencies, dynamic workflows, and production deployment.
apache-spark-data-processing
manutej/luxor-claude-marketplace: Complete guide for Apache Spark data processing including RDDs, DataFrames, Spark SQL, streaming, MLlib, and production deployment.
dbt-data-transformation
manutej/luxor-claude-marketplace: Complete guide for dbt data transformation including models, tests, documentation, incremental builds, macros, packages, and production workflows.
kafka-stream-processing
manutej/luxor-claude-marketplace: Complete guide for Apache Kafka stream processing including producers, consumers, Kafka Streams, connectors, schema registry, and production deployment.
backend-migrations
coreyja/coreyja: Create and manage database migrations with proper rollback methods, focused changes, and zero-downtime deployment considerations. Use this skill when creating new database migration files, modifying table schemas, adding or removing columns, creating or dropping indexes, or managing database version control. When working with migration directories, schema definition files, or database change scripts. When implementing backwards-compatible database changes for production deployments. When separating schema changes from data migrations.
managing-bd-tasks
withzombies/hyperpowers: Use for advanced bd operations beyond basic create/close - splitting tasks mid-flight, merging duplicates, changing dependencies, archiving epics, querying for metrics, managing cross-epic dependencies.
aws-rds-database
aj-geddes/useful-ai-prompts: Deploy and manage relational databases using RDS with Multi-AZ, read replicas, backups, and encryption. Use for PostgreSQL, MySQL, MariaDB, and Oracle.
cloud-migration-planning
aj-geddes/useful-ai-prompts: Plan and execute cloud migrations with assessment, database migration, application refactoring, and cutover strategies across AWS, Azure, and GCP.
data-cleaning-pipeline
aj-geddes/useful-ai-prompts: Build robust processes for data cleaning, missing value imputation, outlier handling, and data transformation for data preprocessing, data quality, and data pipeline automation.
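The imputation step such a pipeline performs reduces to a few lines of standard-library Python (the values are made up for illustration): replace missing entries with the column median so downstream aggregates stay defined without being skewed by a handful of gaps.

```python
import statistics

# Toy column with missing values represented as None.
values = [10.0, 12.0, None, 11.0, None, 9.0]

# Median of the present values is robust to outliers.
present = [v for v in values if v is not None]
median = statistics.median(present)  # 10.5 here

imputed = [median if v is None else v for v in values]
print(imputed)  # [10.0, 12.0, 10.5, 11.0, 10.5, 9.0]
```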
data-migration-scripts
aj-geddes/useful-ai-prompts: Create safe, reversible database migration scripts with rollback capabilities, data validation, and zero-downtime deployments. Use when changing database schemas, migrating data between systems, or performing large-scale data transformations.
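The reversibility contract such scripts follow can be sketched with sqlite3: every `up()` is paired with a `down()` that restores the previous schema. The `users` table and `nickname` column are hypothetical; the rollback rebuilds the table for portability, since older SQLite builds lack `ALTER TABLE ... DROP COLUMN`.

```python
import sqlite3

def up(conn):
    # Forward migration: add the new column.
    conn.execute("ALTER TABLE users ADD COLUMN nickname TEXT")

def down(conn):
    # Rollback: recreate the table without the column, preserving data.
    conn.executescript("""
        CREATE TABLE users_old (id INTEGER PRIMARY KEY);
        INSERT INTO users_old (id) SELECT id FROM users;
        DROP TABLE users;
        ALTER TABLE users_old RENAME TO users;
    """)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
up(conn)
cols_after_up = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
down(conn)
cols_after_down = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
print(cols_after_up, cols_after_down)  # ['id', 'nickname'] ['id']
```

Validating that `down(up(schema)) == schema`, as the asserted column lists do here, is the cheapest safety check a migration suite can run before deployment.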
database-migration-management
aj-geddes/useful-ai-prompts: Manage database migrations and schema versioning. Use when planning migrations, version control, rollback strategies, or data transformations in PostgreSQL and MySQL.
database-query-optimization
aj-geddes/useful-ai-prompts: Improve database query performance through indexing, query optimization, and execution plan analysis. Reduce response times and database load.
database-schema-design
aj-geddes/useful-ai-prompts: Design database schemas with normalization, relationships, and constraints. Use when creating new database schemas, designing tables, or planning data models for PostgreSQL and MySQL.
event-sourcing
aj-geddes/useful-ai-prompts: Implement event sourcing and CQRS patterns using event stores, aggregates, and projections. Use when building audit trails, temporal queries, or systems requiring full history.
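The core pattern is compact enough to sketch in pure Python (event names and payloads invented): current state is never stored directly, only derived by folding the append-only log through a projection, which is what makes audit trails and temporal queries free.

```python
# Append-only event log; events are facts and are never mutated.
events = []

def append(event_type, payload):
    events.append({"type": event_type, **payload})

def project_balance(log):
    # Projection: fold the full history into the current state.
    balance = 0
    for e in log:
        if e["type"] == "deposited":
            balance += e["amount"]
        elif e["type"] == "withdrawn":
            balance -= e["amount"]
    return balance

append("deposited", {"amount": 100})
append("withdrawn", {"amount": 30})
append("deposited", {"amount": 5})
print(project_balance(events))  # 75
```

Replaying a prefix of the log gives the state at any past point, and new projections can be added later without touching stored data.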
ml-pipeline-automation
aj-geddes/useful-ai-prompts: Build end-to-end ML pipelines with automated data processing, training, validation, and deployment using Airflow, Kubeflow, and Jenkins.
nosql-database-design
aj-geddes/useful-ai-prompts: Design NoSQL database schemas for MongoDB and DynamoDB. Use when modeling document structures, designing collections, or planning NoSQL data architectures.
sql-query-optimization
aj-geddes/useful-ai-prompts: Analyze and optimize SQL queries for performance. Use when improving slow queries, reducing execution time, or analyzing query performance in PostgreSQL and MySQL.
database-schema-designer
OneWave-AI/claude-skills: Design optimized database schemas for SQL and NoSQL databases including tables, relationships, indexes, and constraints. Creates ERD diagrams, migration scripts, and data modeling best practices. Use when users need database design, schema optimization, or data architecture planning.
orchestration
williamzujkowski/standards: Orchestration standards for Data Engineering environments.
database-design
akaszubski/autonomous-dev: Database schema design, migrations, query optimization, and ORM patterns. Use when designing database schemas, writing migrations, optimizing queries, or working with ORMs like SQLAlchemy or Django ORM.
specweave-ado-mapper
anton-abyzov/specweave: Expert in bidirectional conversion between SpecWeave increments and Azure DevOps (ADO) Epics/Features/User Stories/Tasks. Handles export (increment → ADO), import (ADO → increment), and bidirectional sync with conflict resolution. Activates for ADO sync, Azure DevOps sync, work item creation, import from ADO.
python-backend
anton-abyzov/specweave: Python backend developer for FastAPI, Django, Flask APIs with SQLAlchemy, Django ORM, Pydantic validation. Implements REST APIs, async operations, database integration, authentication, data processing with pandas/numpy, machine learning integration, background tasks with Celery, API documentation with OpenAPI/Swagger. Activates for: Python, Python backend, FastAPI, Django, Flask, SQLAlchemy, Django ORM, Pydantic, async Python, asyncio, uvicorn, REST API Python, authentication Python, pandas, numpy, data processing, machine learning, ML API, Celery, Redis Python, PostgreSQL Python, MongoDB Python, type hints, Python typing.
erd-skill
Byunk/claude-code-toolkit: Comprehensive database design and ERD (Entity-Relationship Diagram) toolkit using DBML format. This skill should be used when creating database schemas from requirements, analyzing existing DBML files for improvements, designing database architecture, or providing guidance on database modeling, normalization, indexing, and relationships.
logseq-db-knowledge
kerim/logseq-db-knowledge: Essential knowledge about Logseq DB (database) graphs. Use this skill when working with Logseq DB to ensure accurate understanding of nodes, properties, tags, tasks, and queries. This corrects common misconceptions from file-based Logseq that do NOT apply to DB graphs.
rewrite-yaml
sjungling/claude-plugins: Expert in test-first development of production-quality OpenRewrite recipes for YAML manipulation using LST structure, visitor patterns, and JsonPath matching. Automatically activates when working with OpenRewrite recipe files or Java files in `src/main/java/**/rewrite/**` directories.
recipe-writer
sjungling/claude-plugins: Expert in test-first development of production-quality OpenRewrite recipes for automated code refactoring. Automatically activates when working with OpenRewrite recipe files, Java/YAML files in `src/main/java/**/rewrite/**` directories, writing tests implementing `RewriteTest`, or when users ask about recipe development, writing recipes, creating migrations, LST manipulation, JavaTemplate usage, visitor patterns, preconditions, scanning recipes, YAML recipes, GitHub Actions transformations, Kubernetes manifest updates, or code migration strategies. Guides recipe type selection (declarative/Refaster/imperative), visitor implementation, and test-driven development workflows.
claude-compass-best-practices
AizenvoltPrime/claude-compass: Enforce Claude Compass development standards and best practices. This skill should be used when writing or modifying code in the Claude Compass repository, including parsers, database migrations, graph builders, MCP tools, and core services. It ensures adherence to code quality principles, proper error handling, self-documenting code, and established architectural patterns.
brokle-migration-workflow
brokle-ai/brokle: Use this skill when creating, running, or managing database migrations for PostgreSQL or ClickHouse. This includes creating new migrations, running migrations, checking migration status, rollback operations, seeding data, or troubleshooting migration issues.
jj-hierarchical-data-management
Dmccarty30/Journeyman-Jobs: Design hierarchical data structures for electrical trades platform. Covers IBEW organizational hierarchy (Nation→Region→Local→Jobs), job models with per diem, crew hierarchies, job aggregation pipelines, offline-first sync, territory-based queries, and multi-level caching. Use when designing data models, implementing offline sync, or structuring territorial data.
backend-dev-guidelines
DojoCodingLabs/claude-code-waypoint: Comprehensive backend development guide for Supabase Edge Functions + PostgreSQL. Use when working with Supabase (database, auth, storage, realtime), Edge Functions, PostgreSQL, Row-Level Security (RLS), Resend email, Stripe payments, or TypeScript backend patterns. Covers database design, auth flows, Edge Function patterns, RLS policies, email integration, payment processing, and deployment to Supabase.
sql-cli
Interstellar-code/claud-skills: Token-efficient MySQL/PostgreSQL operations using mycli and native CLI tools (Windows/Mac/Linux compatible). Replaces Artisan Tinker for database queries with 87% token savings.