# WhatsApp Group Analytics: Monitor, Measure, Act

## Which Groups Are Thriving?
You're running 10 WhatsApp groups. Maybe it's customer communities for different products. Regional support channels. Internal team coordination. Whatever the use case, you've got thousands of members across multiple groups.
Here's the problem: you have no idea what's actually happening in them.
Which groups are active? Which are dead? Who are the power users? When do most messages arrive? What topics drive engagement? Without data, you're flying blind. Managing groups becomes reactive—you only notice problems when someone complains.
There's a better way. With MoltFlow's group monitoring API, you can track every message, analyze engagement patterns, and set up automated alerts. Turn your WhatsApp groups into a data-driven operation.
Let me show you how.
## What to Track
Not all metrics matter. Focus on the ones that drive decisions.
Core engagement metrics:
| Metric | What It Tells You | Why It Matters |
|---|---|---|
| Messages per day | Overall activity level | Identifies dead vs. thriving groups |
| Unique active members | Actual participation | 1,000 members but only 10 posting? That's a problem |
| Response time | Conversation flow | Fast responses = healthy community |
| Peak activity hours | When members are online | Schedule announcements for maximum reach |
| Top contributors | Power users | Potential moderators or brand advocates |
| Growth rate | New members joining | Community expanding or stagnating? |
Advanced metrics for mature communities:
- Topic clustering: What are people talking about? (Product issues, feature requests, general chat)
- Sentiment trends: Is the mood positive, negative, or neutral?
- Member lifecycle: Time to first message, days until inactive, churn rate
- Admin response rate: How fast are moderators addressing questions?
You don't need all of these on day one. Start with messages per day and unique active members. Add complexity as you scale.
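The member lifecycle metrics above fall out of the same message events you'll store later in this guide. A minimal sketch, assuming message dicts with `sender` and Unix-epoch `timestamp` fields, plus a hypothetical `join_times` map built from your member roster:

```python
def time_to_first_message(events, join_times):
    """Seconds from joining the group to a member's first message.
    `events`: message dicts with 'sender' and 'timestamp' (Unix epoch).
    `join_times`: sender -> join timestamp (from your member roster).
    Members who never posted (lurkers) are absent from the result."""
    first_msg = {}
    for e in sorted(events, key=lambda e: e['timestamp']):
        first_msg.setdefault(e['sender'], e['timestamp'])
    return {
        member: first_msg[member] - joined
        for member, joined in join_times.items()
        if member in first_msg
    }

# Illustrative data: carol joined but never posted
events = [
    {'sender': 'alice', 'timestamp': 1000},
    {'sender': 'bob', 'timestamp': 5000},
    {'sender': 'alice', 'timestamp': 9000},
]
join_times = {'alice': 400, 'bob': 1000, 'carol': 2000}
print(time_to_first_message(events, join_times))
# → {'alice': 600, 'bob': 4000}
```

Members missing from the output are your lurkers: they joined but never spoke, which is exactly the churn-risk segment the lifecycle metric is meant to surface.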
## Step 1: Set Up Group Monitoring
First, tell MoltFlow which groups to watch.
```bash
curl -X POST https://apiv2.waiflow.app/api/v2/groups \
  -H "Authorization: Bearer YOUR_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "session_name": "community-monitor",
    "wa_group_id": "[email protected]",
    "name": "Product Community - North America",
    "monitoring_enabled": true
  }'
```

How to get the group ID: Send a message to the group via WhatsApp Web, then check the URL. It'll look like `https://web.whatsapp.com/chat/[email protected]`—that's your group ID.
Or use MoltFlow's session endpoint to list all groups:

```python
import requests

API_BASE = "https://apiv2.waiflow.app/api/v2"
API_TOKEN = "your_api_key_here"
SESSION = "community-monitor"

def get_all_groups():
    """Fetch all chats from the WhatsApp session, keeping only groups"""
    headers = {
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json"
    }
    # Get all chats (includes groups)
    response = requests.get(
        f"{API_BASE}/sessions/{SESSION}/chats",
        headers=headers
    )
    if response.status_code != 200:
        print(f"Failed to fetch chats: {response.text}")
        return []

    chats = response.json()
    # Group IDs end in @g.us; direct chats end in @c.us
    groups = [chat for chat in chats if chat['id'].endswith('@g.us')]
    return groups

# Example: register all groups for monitoring.
# register_for_monitoring is your wrapper around the POST /groups call above.
groups = get_all_groups()
for group in groups:
    register_for_monitoring(group['id'], group['name'])
```

Once registered, MoltFlow starts capturing every message sent in that group. All events flow to your webhook endpoint.
## Step 2: Capture Group Events
MoltFlow fires webhooks for every group activity. You need to catch these events and extract the data.
```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/webhooks/group-events', methods=['POST'])
def handle_group_event():
    """Process incoming group message events"""
    event = request.json

    # Only handle message events
    if event.get('event') != 'message':
        return jsonify({'status': 'ignored'}), 200

    payload = event.get('payload', {})
    message = payload.get('message', {})

    # Group messages come from @g.us addresses
    group_id = message.get('from', '')
    if not group_id.endswith('@g.us'):
        return jsonify({'status': 'not_group'}), 200

    # Parse message details
    event_data = {
        'group_id': group_id,
        'sender': message.get('author', '').replace('@c.us', ''),
        'message_id': message.get('id', ''),
        'timestamp': message.get('timestamp'),
        'message_type': message.get('type', 'text'),
        'text': message.get('text', {}).get('body', '') if message.get('type') == 'text' else None,
        'has_media': message.get('type') in ['image', 'video', 'document'],
        'is_reply': message.get('hasQuotedMsg', False)
    }

    # Store in database
    save_group_event(event_data)
    return jsonify({'status': 'processed'}), 200

def save_group_event(event_data):
    """Save group event to database for analytics"""
    # Example schema for PostgreSQL:
    # CREATE TABLE group_events (
    #     id SERIAL PRIMARY KEY,
    #     group_id VARCHAR(100) NOT NULL,
    #     sender VARCHAR(50) NOT NULL,
    #     message_id VARCHAR(100) UNIQUE NOT NULL,
    #     timestamp BIGINT NOT NULL,
    #     message_type VARCHAR(20),
    #     text TEXT,
    #     has_media BOOLEAN,
    #     is_reply BOOLEAN,
    #     created_at TIMESTAMP DEFAULT NOW()
    # );
    # Your database insertion logic here
    print(f"Stored event from {event_data['sender']} in group {event_data['group_id']}")

if __name__ == '__main__':
    app.run(port=5000)
```

Schema design tip: Store raw timestamps (Unix epoch) from WhatsApp, not converted datetimes. It makes timezone math easier later.
What about deletions? WhatsApp also fires events when messages are deleted. Capture those separately if you need audit trails.
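If you do want an audit trail, here is a sketch of turning a deletion event into an audit record. Note that the `message.revoked` event name and payload layout are assumptions; verify them against MoltFlow's webhook documentation before relying on them:

```python
def extract_deletion_record(event):
    """Build an audit record from a deletion webhook, or return None.
    The 'message.revoked' event name and payload shape are assumptions;
    confirm them against your provider's webhook reference."""
    if event.get('event') != 'message.revoked':
        return None
    message = event.get('payload', {}).get('message', {})
    group_id = message.get('from', '')
    if not group_id.endswith('@g.us'):
        return None  # only audit group deletions
    return {
        'group_id': group_id,
        'message_id': message.get('id', ''),
        'deleted_by': message.get('author', '').replace('@c.us', ''),
        'deleted_at': message.get('timestamp'),
    }

# Hypothetical deletion event with made-up IDs
event = {
    'event': 'message.revoked',
    'payload': {'message': {
        'from': '123456789-987654@g.us',
        'id': 'ABCD1234',
        'author': '15551234567@c.us',
        'timestamp': 1700000000,
    }},
}
record = extract_deletion_record(event)
print(record['deleted_by'])  # → 15551234567
```

Store these records in a separate table from `group_events` so deletions never mutate your message history.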
## Step 3: Calculate Engagement Metrics
Raw events are noise. Aggregated metrics are signals.
Here's how to compute the core engagement metrics:
```python
from collections import defaultdict, Counter
from datetime import datetime

def calculate_group_metrics(group_id, days=7):
    """
    Calculate engagement metrics for a group over the last N days.

    Returns:
        {
            'messages_per_day': float,
            'unique_active_members': int,
            'top_contributors': [(member, message_count), ...],
            'peak_hours': {hour: message_count},
            'avg_response_time': float (seconds)
        }
    """
    # Fetch events from database
    events = get_group_events(group_id, days=days)
    if not events:
        return None
    # The response-time math below assumes chronological order
    events.sort(key=lambda e: e['timestamp'])

    # Messages per day
    total_messages = len(events)
    messages_per_day = total_messages / days

    # Unique active members
    unique_senders = set(e['sender'] for e in events)
    unique_active_members = len(unique_senders)

    # Top contributors (top 10)
    sender_counts = Counter(e['sender'] for e in events)
    top_contributors = sender_counts.most_common(10)

    # Peak hours (0-23)
    peak_hours = defaultdict(int)
    for event in events:
        hour = datetime.fromtimestamp(event['timestamp']).hour
        peak_hours[hour] += 1
    # Sort peak hours by message count, busiest first
    peak_hours = dict(sorted(peak_hours.items(), key=lambda x: x[1], reverse=True))

    # Average response time: gap between a reply and the previous message
    response_times = []
    for i in range(1, len(events)):
        if events[i]['is_reply']:
            time_diff = events[i]['timestamp'] - events[i - 1]['timestamp']
            if time_diff < 3600:  # Only count gaps within 1 hour
                response_times.append(time_diff)
    avg_response_time = sum(response_times) / len(response_times) if response_times else None

    return {
        'group_id': group_id,
        'period_days': days,
        'messages_per_day': round(messages_per_day, 1),
        'total_messages': total_messages,
        'unique_active_members': unique_active_members,
        'top_contributors': top_contributors,
        'peak_hours': peak_hours,
        'avg_response_time': round(avg_response_time, 1) if avg_response_time else None
    }

def get_group_events(group_id, days=7):
    """Fetch group events from database"""
    # Your database query here:
    # filter by group_id and timestamp > (now - days)
    pass

# Example usage
metrics = calculate_group_metrics('[email protected]', days=30)
if metrics:
    print(f"Messages per day: {metrics['messages_per_day']}")
    print(f"Active members: {metrics['unique_active_members']}")
    print(f"Top contributor: {metrics['top_contributors'][0][0]} ({metrics['top_contributors'][0][1]} messages)")
    print(f"Peak hour: {list(metrics['peak_hours'].keys())[0]}:00")
```

Insight: If your "peak hour" is 2 AM, you've got an international community. Schedule announcements accordingly.
Benchmarking: What's healthy?
- Consumer communities: 10-20 messages/day per 100 members is active
- Support groups: 50-100 messages/day indicates strong engagement
- Internal teams: 100+ messages/day is typical for active projects
But don't compare apples to oranges. Compare your groups to themselves over time.
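Comparing a group to itself is just a rolling window over your stored daily counts. A minimal sketch (the daily counts come from whatever aggregation you run over your `group_events` table):

```python
def week_over_week_change(daily_counts):
    """Percent change of the latest 7-day average vs. the prior 7 days.
    `daily_counts`: messages per day, oldest first, at least 14 entries.
    Returns None when there isn't enough history to compare."""
    if len(daily_counts) < 14:
        return None
    prev_avg = sum(daily_counts[-14:-7]) / 7
    curr_avg = sum(daily_counts[-7:]) / 7
    if prev_avg == 0:
        return None  # avoid dividing by zero on a dead group
    return (curr_avg - prev_avg) / prev_avg

# A group that went from 10 to 12 messages/day: +20%
print(week_over_week_change([10] * 7 + [12] * 7))  # → 0.2
```

A +20% week means your last change (new topic, event, announcement cadence) is working; a sustained negative number is your cue to intervene before the group goes quiet.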
## Step 4: Build an Analytics Dashboard
Metrics in a database don't drive decisions. Metrics in a dashboard do.
You've got two options: Google Sheets (fast, no code) or custom dashboard (more control).
### Option A: Google Sheets Dashboard
Push metrics to Google Sheets using their API:
```python
from datetime import datetime

from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

def push_to_sheets(metrics):
    """Push group metrics to Google Sheets for the dashboard"""
    # Authenticate with a service account
    creds = Credentials.from_service_account_file(
        'google-credentials.json',
        scopes=['https://www.googleapis.com/auth/spreadsheets']
    )
    service = build('sheets', 'v4', credentials=creds)
    spreadsheet_id = 'your_spreadsheet_id'

    # Prepare data row
    row = [
        datetime.now().isoformat(),
        metrics['group_id'],
        metrics['messages_per_day'],
        metrics['unique_active_members'],
        metrics['total_messages'],
        metrics['top_contributors'][0][0] if metrics['top_contributors'] else '',
        list(metrics['peak_hours'].keys())[0] if metrics['peak_hours'] else ''
    ]

    # Append to sheet
    body = {'values': [row]}
    result = service.spreadsheets().values().append(
        spreadsheetId=spreadsheet_id,
        range='Metrics!A:G',
        valueInputOption='RAW',
        body=body
    ).execute()
    print(f"Added row to sheet: {result.get('updates').get('updatedRows')} rows")

# Run daily via cron (monitored_groups is your list of registered group IDs)
for group_id in monitored_groups:
    metrics = calculate_group_metrics(group_id, days=1)
    if metrics:
        push_to_sheets(metrics)
```

Then create charts in Google Sheets. Line chart for messages per day. Bar chart for top contributors. Heatmap for peak hours.
Pro tip: Use Google Data Studio (now Looker Studio, still free) to build a real-time dashboard on top of your Sheets data.
### Option B: Custom Dashboard
For production systems, build a proper analytics dashboard:
```python
# API endpoint for the dashboard (add to the Flask app from Step 2)
@app.route('/api/group-analytics/<group_id>')
def get_group_analytics(group_id):
    """Serve group analytics for the dashboard"""
    metrics_7d = calculate_group_metrics(group_id, days=7)
    metrics_30d = calculate_group_metrics(group_id, days=30)

    # Trend analysis
    trend = calculate_trend(group_id)

    return jsonify({
        'group_id': group_id,
        'last_7_days': metrics_7d,
        'last_30_days': metrics_30d,
        'trend': trend  # 'up', 'down', 'stable', or 'new'
    })

def calculate_trend(group_id):
    """Determine whether group engagement is trending up or down"""
    this_week = calculate_group_metrics(group_id, days=7)
    # calculate_group_metrics_offset: same query, shifted back `offset` days
    last_week = calculate_group_metrics_offset(group_id, days=7, offset=7)

    if not this_week or not last_week or last_week['messages_per_day'] == 0:
        return 'new'

    change = (this_week['messages_per_day'] - last_week['messages_per_day']) / last_week['messages_per_day']
    if change > 0.1:
        return 'up'
    elif change < -0.1:
        return 'down'
    else:
        return 'stable'
```

Render this data with React, Next.js, or whatever you're using for your admin panel.
## Step 5: Set Up Automated Alerts
Analytics are reactive. Alerts are proactive.
Set up automatic notifications when something goes wrong:
```python
import time

import requests
import schedule

def check_group_health(group_id):
    """Monitor group health and alert if issues are detected"""
    metrics = calculate_group_metrics(group_id, days=7)
    if not metrics:
        return []

    alerts = []

    # 1. Activity dropped below threshold
    if metrics['messages_per_day'] < 5:
        alerts.append({
            'type': 'low_activity',
            'severity': 'warning',
            'message': f"Group activity dropped to {metrics['messages_per_day']} msgs/day"
        })

    # 2. Too few active members this week
    if metrics['unique_active_members'] < 3:
        alerts.append({
            'type': 'engagement_risk',
            'severity': 'warning',
            'message': f"Only {metrics['unique_active_members']} active members this week"
        })

    # 3. Spam detection (single user posting >50% of messages)
    if metrics['top_contributors']:
        top_user_pct = metrics['top_contributors'][0][1] / metrics['total_messages']
        if top_user_pct > 0.5:
            alerts.append({
                'type': 'spam_risk',
                'severity': 'high',
                'message': f"User {metrics['top_contributors'][0][0]} posted {top_user_pct*100:.0f}% of messages"
            })

    # Send alerts
    for alert in alerts:
        send_alert(group_id, alert)
    return alerts

def send_alert(group_id, alert):
    """Send alert to admin (Slack, email, WhatsApp, etc.)"""
    # Example: Send to Slack
    slack_webhook_url = "https://hooks.slack.com/services/YOUR/WEBHOOK/URL"
    message = f"""
🚨 Group Alert: {alert['severity'].upper()}
Group: {group_id}
Issue: {alert['type']}
Details: {alert['message']}
"""
    requests.post(slack_webhook_url, json={'text': message})

    # Or send the alert via WhatsApp to an admin number
    # (send_whatsapp_message is your wrapper around MoltFlow's send endpoint)
    send_whatsapp_message(
        phone="admin_phone_number",
        text=f"⚠️ {alert['message']}"
    )

# Run health checks every 6 hours
schedule.every(6).hours.do(lambda: check_group_health('[email protected]'))

while True:
    schedule.run_pending()
    time.sleep(60)
```

Alert fatigue is real. Don't notify on every small change. Set thresholds that matter. Adjust over time as you learn what's normal for each group.
Anomaly detection: Instead of fixed thresholds, use statistical methods. If messages/day drops 2 standard deviations below the 30-day average, something's wrong.
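A sketch of that statistical check using only the standard library; the window length and 2-sigma threshold are tunable assumptions, not magic numbers:

```python
import statistics

def is_activity_anomaly(daily_counts, today_count, sigma=2.0):
    """True if today's message count falls more than `sigma` standard
    deviations below the trailing mean (e.g. over a 30-day window)."""
    if len(daily_counts) < 7:
        return False  # not enough history for a meaningful baseline
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    if stdev == 0:
        return today_count < mean  # flat history: any drop is notable
    return today_count < mean - sigma * stdev

# ~30 days hovering around 100 msgs/day, then a sudden drop to 50
history = [100, 110, 90, 105, 95, 100, 102, 98] * 4
print(is_activity_anomaly(history, 50))   # → True
print(is_activity_anomaly(history, 95))   # → False
```

Because the threshold adapts to each group's own variance, a quiet internal team and a busy support community get appropriate baselines without per-group configuration.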
## Advanced: Sentiment Analysis
Want to know if your community is happy or frustrated?
```python
from transformers import pipeline

# Load the sentiment model once at startup and reuse it
sentiment_analyzer = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english"
)

def analyze_group_sentiment(group_id, days=7):
    """Calculate a sentiment score for recent group messages"""
    events = get_group_events(group_id, days=days)
    text_messages = [e['text'] for e in events if e['text']]
    if not text_messages:
        return None

    # Batch analysis is more efficient than one message at a time
    results = sentiment_analyzer(text_messages[:100])  # Limit for speed

    # Aggregate sentiment. Note: this SST-2 model is binary, so every
    # message is labeled POSITIVE or NEGATIVE; neutral_messages will stay 0
    # unless you swap in a 3-class model.
    positive = sum(1 for r in results if r['label'] == 'POSITIVE')
    negative = sum(1 for r in results if r['label'] == 'NEGATIVE')
    sentiment_score = (positive - negative) / len(results)

    return {
        'sentiment_score': round(sentiment_score, 2),  # -1 to +1
        'positive_messages': positive,
        'negative_messages': negative,
        'neutral_messages': len(results) - positive - negative
    }

# Usage
sentiment = analyze_group_sentiment('[email protected]', days=7)
if sentiment:
    score = sentiment['sentiment_score']
    print(f"Group sentiment: {score} ({'positive' if score > 0 else 'negative'})")
```

This uses a free Hugging Face model. No API costs. Runs locally or in your backend.
Warning: Sentiment analysis on short messages is noisy. Use for directional trends ("sentiment improving this month"), not absolute judgments.
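For directional use, compare the aggregate score week to week rather than reading individual scores. A sketch, assuming you store one `sentiment_score` per week:

```python
def sentiment_direction(weekly_scores, threshold=0.05):
    """Classify the direction of sentiment from weekly scores (-1..+1,
    oldest first). Only the sign of the change is trusted; `threshold`
    absorbs the noise inherent in short-message sentiment."""
    if len(weekly_scores) < 2:
        return 'unknown'
    delta = weekly_scores[-1] - weekly_scores[-2]
    if delta > threshold:
        return 'improving'
    if delta < -threshold:
        return 'declining'
    return 'flat'

print(sentiment_direction([0.10, 0.32]))  # → improving
print(sentiment_direction([0.30, 0.28]))  # → flat
```

A 'declining' reading over two or more consecutive weeks is a reasonable trigger for the alerting pipeline from Step 5; a single week's dip usually is not.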
## What's Next?
You've built a complete group analytics system. But there's more you can do:
- Member engagement scores: Rank members by participation, identify lurkers vs contributors
- Topic modeling: Automatically categorize discussions (product bugs, feature requests, general chat)
- Predictive churn: Identify members who are becoming inactive before they leave
- Automated moderation: Flag potentially problematic messages for human review
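As a starting point for member engagement scores, you can weight the event fields you already store from Step 2. The weights here are illustrative assumptions, not a standard:

```python
from collections import Counter

def engagement_scores(events, weights=None):
    """Rank members by weighted participation. Replies and media posts
    count extra on the theory that they signal more effort; tune the
    weights to your community."""
    w = weights or {'base': 1.0, 'reply': 0.5, 'media': 0.5}
    scores = Counter()
    for e in events:
        score = w['base']
        if e.get('is_reply'):
            score += w['reply']
        if e.get('has_media'):
            score += w['media']
        scores[e['sender']] += score
    return scores.most_common()

events = [
    {'sender': 'alice', 'is_reply': True, 'has_media': False},
    {'sender': 'alice', 'is_reply': False, 'has_media': True},
    {'sender': 'bob', 'is_reply': False, 'has_media': False},
]
print(engagement_scores(events))  # → [('alice', 3.0), ('bob', 1.0)]
```

The top of this ranking is your moderator shortlist; the long tail of near-zero scores is where lurker re-engagement campaigns start.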
Continue learning:
- WhatsApp Group Lead Generation — Turn group conversations into qualified leads
- WhatsApp Conversion Metrics — Track ROI and optimize campaigns
- Lead Scoring Automation — Prioritize high-intent prospects automatically
- Track Message Usage & Plan Limits — Monitor API usage and quotas
Ready to implement group analytics? Follow our step-by-step guide: Monitor WhatsApp Groups for Leads
Sign up for MoltFlow and connect your first group in under 5 minutes. Need help? Check our API documentation—we're here to help.
> Try MoltFlow Free — 100 messages/month