Writing Effective Queries for the Data Assistant

The Data Assistant is designed to help you query data using natural language. However, it may not always interpret vague, incomplete, or overly broad questions the way you intend. To ensure accurate and useful results, it is important to be clear and specific in your queries.

How to Structure Effective Questions for the Data Assistant

While the Data Assistant is built to interpret natural language, it performs best when your questions are precise, structured, and unambiguous.

A well-formed question typically includes specific elements that guide the assistant, such as the data you are interested in, a relevant time frame, any necessary filters (like location or device type), and optionally a comparison or reason. These components help narrow the scope and ensure you receive results that are accurate and actionable.

The entries below pair the key parts of an effective question with broad guidelines to help you avoid common pitfalls. Use them as a reference when crafting your queries. Even including just two or three of these elements can greatly improve your results.

Each entry below gives a component or tip, a description, an example of what to avoid, and what to ask instead.

Start with a question word
Description: Use "What", "Which", "How many", or "Is there..." to frame a clear query.
Avoid: Just keywords like "Top stations"
Ask instead: "Which stations had the most sessions in Q1 2025?"

Specify the metric or data point
Description: State clearly what you're asking about: sessions, energy, uptime, and so on.
Avoid: "Driver growth"
Ask instead: "Number of new driver sign-ups each month in 2025"

Add a filter or condition
Description: Use qualifiers like location, station type, or status to narrow your query.
Avoid: "Energy dispensed"
Ask instead: "Energy dispensed by dual-port stations in California"

Include a time frame
Description: Always specify the period of interest to get accurate, scoped results.
Avoid: "Utilization rate"
Ask instead: "Utilization rate in May 2025"

Comparison or reasoning (if needed)
Description: Ask for comparisons across time or categories, or include the reason for your inquiry.
Avoid: "What changed?"
Ask instead: "Compare station uptime in March and April 2025"

Be specific
Description: Avoid general terms like "top" or "high-performing" unless you define them.
Avoid: "Top stations"
Ask instead: "Top 10 stations by total sessions in Q1 2025"

Define scope (location, group, type)
Description: Clarify geography, user group, charger model, and so on.
Avoid: "Revenue by region"
Ask instead: "Monthly revenue in California and Texas for 2025"

Clarify metrics
Description: Use measurable indicators like "sessions", "energy", "uptime", or "sign-ups".
Avoid: "Performance summary"
Ask instead: "Display uptime and total sessions for March 2025"

Avoid subjective or vague terms
Description: Terms like "popular", "frequent", and "problematic" are open to interpretation.
Avoid: "Most problematic stations"
Ask instead: "Stations with more than 5 faults reported in April 2025"

Use complete questions
Description: Instead of keywords, write full thoughts or natural-language questions.
Avoid: "Energy May 2025"
Ask instead: "Display total energy dispensed by all stations in May 2025"
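The checklist above can be approximated in code. The snippet below is a minimal, hypothetical Python sketch (not part of the Data Assistant itself) that flags which recommended elements a draft query appears to be missing before you submit it. The keyword lists and the time-frame pattern are illustrative assumptions, not the assistant's actual vocabulary.

```python
import re

# Illustrative keyword lists -- assumptions for this sketch, not the
# Data Assistant's actual vocabulary.
QUESTION_WORDS = ("what", "which", "how many", "is there", "display", "list", "compare")
METRICS = ("session", "energy", "uptime", "revenue", "signup", "sign-up", "fault", "complaint")
TIME_PATTERN = re.compile(
    r"\b(q[1-4]\s*\d{4}|\d{4}|january|february|march|april|may|june|july|"
    r"august|september|october|november|december|last\s+(week|month|quarter|\d+\s+days))\b",
    re.IGNORECASE,
)

def missing_elements(query: str) -> list[str]:
    """Return the recommended query elements that appear to be missing."""
    q = query.lower()
    missing = []
    if not q.startswith(QUESTION_WORDS):
        missing.append("question word or command")
    if not any(m in q for m in METRICS):
        missing.append("specific metric")
    if not TIME_PATTERN.search(query):
        missing.append("time frame")
    return missing

# A vague query is flagged on all three counts; a well-formed one passes.
print(missing_elements("Top stations"))
print(missing_elements("Which stations had the most sessions in Q1 2025?"))
```

A real pre-flight check would need a richer vocabulary and smarter date parsing, but even this rough heuristic catches the most common gaps the guidelines describe.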

Examples of Common Questions and How to Improve Them

The entries below list common statements that may not produce the desired outcome, explain why they can be misunderstood, and offer better phrasings, along with tips to improve your overall query quality.

Each entry below gives the original statement, what is wrong with it, a better way to ask, and a tip.

Statement: "Top 10 stations"
Problem: "Top" is ambiguous: by sessions, energy, or revenue?
Better: "Display the top 10 stations by total energy dispensed in Q1 2025." or "List the 10 stations with the most charging sessions in California in May 2025."
Tip: Always specify metric, location, and time period.

Statement: "Highly utilized stations"
Problem: "Utilized" is subjective: by what metric?
Better: "List stations with utilization over 85% in May 2025." or "Display stations with more than 200 sessions in April 2025."
Tip: Define what "utilization" means and provide a threshold.

Statement: "Revenue by region"
Problem: Too broad: the revenue type, region, and period are unclear.
Better: "Display total revenue by region for Q1 2025." or "What was the monthly revenue in California and Texas in 2024?"
Tip: Specify type, region, and time frame.

Statement: "Faulty stations"
Problem: "Faulty" is vague: severity, type, and period are not defined.
Better: "List stations with any fault reported in the last 7 days." or "Display stations with critical faults in March 2025."
Tip: Define fault severity, fault type, and time frame.

Statement: "Driver growth"
Problem: Could refer to sign-ups, active drivers, or paid drivers.
Better: "How many new drivers signed up each month in 2024?" or "Display active drivers by month in Q1 2025."
Tip: Clarify user type and time period.

Statement: "Busy times"
Problem: Unclear whether you mean peak hours, days, or wait times.
Better: "What were the peak charging hours at Station 123 in May 2025?" or "Which hours had the highest session count in San Jose stations last month?"
Tip: Specify metric and location.

Statement: "Inactive users"
Problem: "Inactive" could mean no sessions, no logins, or no payments.
Better: "List users with no sessions in the past 90 days." or "Display users inactive since January 2025."
Tip: Define inactivity and time period.

Statement: "Low usage stations"
Problem: "Low" is undefined: by sessions, energy, or uptime?
Better: "Display stations with fewer than 20 sessions per week in May 2025." or "List stations that dispensed less than 100 kWh last month."
Tip: Use quantifiable thresholds and specific metrics.

Statement: "How are we doing?"
Problem: Too general: performance in what area?
Better: "What was total revenue in Q1 2025?" or "Compare average uptime in May vs April 2025."
Tip: Narrow it down to specific metrics.

Statement: "Station issues"
Problem: "Issues" can mean faults, downtime, or complaints.
Better: "List stations with more than 3 hours of downtime in May 2025." or "Display stations with connector errors in April 2025."
Tip: Specify issue type, threshold, and time frame.

Statement: "Performance summary"
Problem: "Performance" is vague: for whom, and measured by what?
Better: "Display station uptime and session count for March 2025." or "Summarize driver activity trends for Q1 2025."
Tip: Clarify who and what you are evaluating.

Statement: "Session data"
Problem: Too broad: sessions could mean count, duration, and so on.
Better: "What was the average session duration in May 2025?" or "Display total sessions last week."
Tip: Specify what you want to know about sessions.

Statement: "Compare locations"
Problem: Unclear what to compare: energy, revenue, or sessions?
Better: "Compare energy dispensed between LA and SD in Q1 2025." or "Which city had more sessions in April 2025?"
Tip: Define the comparison metric and locations.

Statement: "Which stations are down?"
Problem: Unclear whether you mean now, historically, or by fault type.
Better: "List stations currently offline." or "Which stations had over 2 hours of downtime in the past week?"
Tip: Clarify time frame and fault type.

Statement: "Top complaints"
Problem: Too general: from whom, about what, and when?
Better: "What were the most frequent complaints in April 2025?" or "List the top 5 complaint categories from last quarter."
Tip: Specify complaint source, category, and time period.

Statement: "Average usage"
Problem: "Average" of what: per user, per station, or by energy?
Better: "What was the average energy per station in May 2025?" or "Display average sessions per user in Q1 2025."
Tip: Define the basis and scope of "average".

Statement: "Station uptime"
Problem: Incomplete: which station, and what time period?
Better: "Display uptime for Station 123 in April 2025." or "List stations with uptime below 95% in Q2 2025."
Tip: Include the station and time frame.

Statement: "Growth over time"
Problem: Vague: growth in what, and over what period?
Better: "Display monthly revenue growth from Jan to May 2025." or "How did new driver sign-ups change since 2024?"
Tip: Specify growth type and time frame.

Statement: "Station activity"
Problem: Too vague: activity in terms of sessions, uptime, or energy?
Better: "Display total sessions and uptime for Station 456 in May 2025."
Tip: Specify the type of activity and the station.

Statement: "Top users"
Problem: Unclear what makes a user "top": most sessions, energy, or revenue?
Better: "List users with the highest total sessions in Q1 2025."
Tip: Define what makes a user "top" and include a time period.

Statement: "Worst-performing stations"
Problem: "Worst" is subjective: by faults, downtime, or session failures?
Better: "Display stations with the most fault incidents in April 2025."
Tip: Be specific about performance metrics and include a time frame.

Statement: "Session trends"
Problem: Too general: a trend in count, duration, or station usage?
Better: "Display weekly session count trends for California from Jan to May 2025."
Tip: Clarify the metric and time frame.

Statement: "Charging behavior"
Problem: Vague: are you asking about session frequency, duration, or time of day?
Better: "Display average session duration and frequency per driver for Q2 2025."
Tip: Define the behavior you're analyzing.

Statement: "Session breakdown"
Problem: Ambiguous: a breakdown by what? Time? Location? Charger type?
Better: "Display the session breakdown by connector type in April 2025."
Tip: Specify the dimension for the breakdown.

Statement: "Stations with problems"
Problem: "Problems" is too generic: faults, offline status, or complaints?
Better: "List stations with more than 3 complaints or faults reported in the past month."
Tip: Define "problems" and set a measurable threshold.

Statement: "Energy trends"
Problem: Vague: a trend by user, station, region, or time?
Better: "Display monthly energy dispensed across all stations from Jan to May 2025."
Tip: Define the scope and the trend interval.

Statement: "Usage efficiency"
Problem: Unclear term: efficiency of what? The station? The charger?
Better: "Display stations with energy dispensed per session greater than 25 kWh in May 2025."
Tip: Translate "efficiency" into a measurable metric.

Statement: "How many complaints?"
Problem: About what: station issues, billing, or the app? From drivers or site hosts?
Better: "How many driver complaints were submitted in April 2025 regarding charging failures?"
Tip: Specify complaint type, source, and time frame.

Statement: "Successful sessions"
Problem: Success by what criteria: completed sessions, no errors, or payment processed?
Better: "Display the number of completed charging sessions without faults in Q1 2025."
Tip: Define what "successful" means.

Statement: "Charger downtime"
Problem: Vague: which chargers, when, and how much downtime?
Better: "List chargers with more than 2 hours of downtime in the last 30 days."
Tip: Always include the time frame and unit of measurement.

Statement: "What's trending?"
Problem: Too broad: stations, users, or features?
Better: "Which charging stations had the highest growth in session count from April to May 2025?"
Tip: Specify what you want to trend and the comparison window.

Statement: "Where is the highest demand?"
Problem: Unclear: demand in sessions, energy, or station usage?
Better: "Display the cities with the highest number of charging sessions in Q1 2025."
Tip: Define "demand" and the level of aggregation.

Statement: "Active drivers"
Problem: Over what time frame, and what does "active" mean?
Better: "List drivers who completed at least one charging session in May 2025."
Tip: Clarify what qualifies as "active".

Statement: "What's causing failures?"
Problem: Ambiguous: failures in sessions, hardware, or payments?
Better: "What were the most frequent fault codes for failed charging sessions in April 2025?"
Tip: Specify the type of failure and the source.

Statement: "Charging speed"
Problem: Unclear: do you mean average kW, peak power, or duration?
Better: "Display the average charging speed (kW) per session in May 2025."
Tip: Define whether you're referring to rate (kW) or time (duration).

Statement: "Top complaints by user"
Problem: Ambiguous: are you asking for complaint volume, type, or user satisfaction?
Better: "List the top 5 complaint types reported by drivers in Q1 2025."
Tip: Be specific about the complaint metric and audience.

Statement: "Driver satisfaction"
Problem: Vague: satisfaction based on what? A survey, NPS (Net Promoter Score), or session success?
Better: "Display the average driver satisfaction score from surveys in 2024."
Tip: Clarify the source and measurement of satisfaction.

Statement: "Charger availability"
Problem: Availability in what sense: uptime, or the number of available ports?
Better: "List stations with availability above 95% in the last 30 days."
Tip: Define how you measure availability.

Statement: "Repeat users"
Problem: Does this refer to return frequency, the same location, or session count?
Better: "Display drivers with more than 5 sessions in the same location in April 2025."
Tip: Clarify what makes a user "repeat".

Statement: "Unusual activity"
Problem: Too subjective: unusual by what standard?
Better: "List stations with session counts 2x higher than their monthly average in May 2025."
Tip: Define what qualifies as "unusual" behavior or performance.

Statement: "Idle time"
Problem: Unclear whether this refers to time after a session ends or time between sessions.
Better: "Display the average post-charging idle time per session in April 2025."
Tip: Be clear on what phase of usage "idle" refers to.

Statement: "Missed sessions"
Problem: Ambiguous: do you mean failed attempts, driver no-shows, or technical issues?
Better: "List the number of failed charging session attempts due to connector faults in May 2025."
Tip: Clarify what constitutes a "missed" session.

Statement: "Network load"
Problem: Unclear: load in terms of energy, sessions, or concurrency?
Better: "Display the peak concurrent session count per region in May 2025."
Tip: Define how you are measuring "load".

Statement: "Underperforming chargers"
Problem: Vague: underperforming by what metric?
Better: "List chargers with less than 50% uptime in the last 30 days."
Tip: Specify what performance means and provide a threshold.

Statement: "All data for Station 102"
Problem: Too broad: what data do you want? Sessions? Revenue? Uptime?
Better: "Display total sessions, revenue, and uptime for Station 102 in May 2025."
Tip: Be specific about the type of data you're looking for.

Statement: "Top drivers in the East"
Problem: Vague: top by what? Sessions? Revenue?
Better: "List the top 10 drivers in the Eastern region by session count in Q1 2025."
Tip: Define the region, metric, and time frame.

Statement: "Display billing issues"
Problem: Unclear: what kind of issues, for which users, and over what period?
Better: "List billing complaints submitted by drivers in April 2025."
Tip: Specify the issue type and audience.

Statement: "Most active region"
Problem: Ambiguous: active by what metric?
Better: "Which region had the highest number of sessions in Q1 2025?"
Tip: Clarify what makes a region "active".

Statement: "Which connectors are bad?"
Problem: "Bad" is vague: do you mean faulty, slow, or often idle?
Better: "List connectors with over 5 faults in the last 30 days."
Tip: Use measurable terms like faults or error codes.

Statement: "Energy consumption"
Problem: Too generic: across what? Stations? Drivers? Time?
Better: "Display total energy consumption by all stations in May 2025."
Tip: Include who, what, and when for the energy data.

Statement: "Station traffic"
Problem: "Traffic" is unclear: do you mean footfall, sessions, or concurrent users?
Better: "Display the average session count per day at Station 301 in April 2025."
Tip: Use a specific, quantifiable metric like "session count".

Statement: "Compare charger types"
Problem: Too vague: compare what? Usage, faults, or uptime?
Better: "Compare session counts between Level 2 and DC Fast chargers in Q1 2025."
Tip: Define what aspect you're comparing and the charger types.

Statement: "Display usage by model"
Problem: Unclear: usage of what (sessions or energy), and which model?
Better: "Display energy dispensed by CT500 chargers in Q2 2025."
Tip: Clarify the product model, metric, and time range.

Statement: "Are users satisfied?"
Problem: Subjective and broad: satisfaction by what measure?
Better: "What was the average NPS (Net Promoter Score) from driver surveys in April 2025?"
Tip: Define the satisfaction metric and source.
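The improved phrasings above all follow one pattern: an action, a metric, an optional scope or threshold, and a time frame. The sketch below is a hypothetical Python helper showing how a team might template well-formed queries for the assistant from those components; the class and field names are assumptions for illustration, not part of any real API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QuerySpec:
    """Components of a well-formed Data Assistant query (illustrative only)."""
    action: str                      # e.g. "Display" or "List"
    metric: str                      # e.g. "total energy dispensed"
    time_frame: str                  # e.g. "May 2025" or "Q1 2025"
    scope: Optional[str] = None      # e.g. "by dual-port stations in California"
    threshold: Optional[str] = None  # e.g. "with more than 5 faults"

    def to_query(self) -> str:
        """Assemble the components into a complete natural-language query."""
        parts = [self.action, self.metric]
        if self.scope:
            parts.append(self.scope)
        if self.threshold:
            parts.append(self.threshold)
        parts.append(f"in {self.time_frame}")
        return " ".join(parts) + "."

# Turn the vague "Energy dispensed" into a fully scoped question.
spec = QuerySpec(
    action="Display",
    metric="energy dispensed",
    scope="by dual-port stations in California",
    time_frame="May 2025",
)
print(spec.to_query())
```

Filling in every field forces you to decide the metric, scope, and time frame up front, which is exactly the discipline the entries above recommend.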