Data validation is the process of automatically checking data that is transmitted or returned via an API, with the aim of ensuring:
Validity (for example, all emails have a valid format)
Completeness (no empty mandatory fields)
Consistency (the date format is always the same, IDs are unique)
Accuracy (data matches the expected source of truth)
Freshness (no outdated data, e.g., the last updated date is not older than X days)
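The five checks above can be sketched with pandas. The sample records, column names, and the freshness cutoff are hypothetical; a real suite would pull these rules from the API contract.

```python
import pandas as pd

# Hypothetical sample of records returned by an API
df = pd.DataFrame({
    "id": [1, 2, 3],
    "email": ["a@example.com", "bad-email", "c@example.com"],
    "updated_at": ["2025-01-10", "2025-01-11", "2025-01-12"],
})

# Validity: every email matches a simple format pattern
valid_email = df["email"].str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Completeness: no empty mandatory fields
complete = df[["id", "email"]].notna().all(axis=1)

# Consistency: dates parse with one fixed format, IDs are unique
dates = pd.to_datetime(df["updated_at"], format="%Y-%m-%d", errors="coerce")
ids_unique = df["id"].is_unique

# Freshness: last update no older than X days (hypothetical cutoff date)
fresh = dates >= pd.Timestamp("2025-01-11")

print(valid_email.tolist())  # [True, False, True] - one invalid email
print(ids_unique)            # True
```

Each check yields a boolean per row, so failures can be counted, logged, or turned into assertions in a test suite.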
Response schema validation - Does the API return all the required data in the correct format?
Data integrity - Are the data correctly linked (e.g., the user has a valid address)?
Data quality rules - Do the values adhere to business rules?
Edge cases - What happens if some data is invalid, empty, or incomplete?
Cross-source validation - Comparing the API responses with the database or other services.
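As a minimal illustration of response schema validation, the first item in the list above, here is a plain-Python check for required fields and types. The `/users`-style payload and the field names are made up for the example; a real project would more likely use a library such as jsonschema or pydantic.

```python
# Hypothetical API response and its expected schema
response = {"id": 42, "email": "user@example.com", "address": {"city": "Berlin"}}
schema = {"id": int, "email": str, "address": dict}

def validate_schema(payload, schema):
    """Return a list of field-level errors: missing keys or wrong types."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"wrong type for {field}")
    return errors

print(validate_schema(response, schema))       # []
print(validate_schema({"id": "42"}, schema))
# ['wrong type for id', 'missing field: email', 'missing field: address']
```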
The data arrives in S3 as CSV files.
Glue DataBrew automatically triggers profiling.
A Lambda function triggers Deequ validation.
If a check fails, the results raise an alert via CloudWatch + SNS (email/Slack).
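The alerting step of this pipeline might look like the Lambda handler below. The check names and event shape are hypothetical; the real handler would create an SNS client with boto3 and call `sns.publish` where the comment indicates, and the Deequ results would arrive from the validation job rather than from the test event.

```python
def build_alert(check_results):
    """Return an alert message for failed checks, or None if all passed."""
    failed = [name for name, passed in check_results.items() if not passed]
    if not failed:
        return None
    return "Data validation failed: " + ", ".join(failed)

def lambda_handler(event, context, publish=print):
    """Publish an alert when any validation check in the event failed."""
    results = event.get("check_results", {})
    alert = build_alert(results)
    if alert:
        publish(alert)  # in production: sns.publish(TopicArn=..., Message=alert)
        return {"status": "ALERT_SENT"}
    return {"status": "OK"}

event = {"check_results": {"completeness": True, "uniqueness": False}}
print(lambda_handler(event, None))  # {'status': 'ALERT_SENT'}
```

Injecting `publish` as a parameter keeps the handler testable locally without any AWS credentials.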
Train set - used for training the model.
Validation set - used for hyperparameter tuning and preventing overfitting.
Test set - used for final evaluation of the model's performance.
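A simple three-way split can be sketched in plain Python. The 70/15/15 fractions and the fixed seed are illustrative defaults; in practice libraries such as scikit-learn's `train_test_split` are commonly used instead.

```python
import random

def split_dataset(data, train_frac=0.7, val_frac=0.15, seed=42):
    """Shuffle the data and split it into train / validation / test sets."""
    rng = random.Random(seed)       # fixed seed makes the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]
    return train, val, test

train, val, test = split_dataset(list(range(100)))
print(len(train), len(val), len(test))  # 70 15 15
```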
Depends on the type of problem:
Accuracy - the proportion of correctly predicted classes.
Precision, Recall, F1-score - trade-off between false positives and false negatives.
ROC AUC - area under the ROC curve.
Confusion matrix - displays TN, TP, FN, FP.
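The metrics above can be computed by hand from the four confusion-matrix cells. The labels and predictions below are a made-up binary example, chosen only to show the formulas.

```python
# Hypothetical binary classification results
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# Confusion matrix cells
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = (tp + tn) / len(y_true)                   # correct / total
precision = tp / (tp + fp)                           # how many flagged 1s are real
recall = tp / (tp + fn)                              # how many real 1s were found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean

print(tp, tn, fp, fn)                   # 3 3 1 1
print(accuracy, precision, recall, f1)  # 0.75 0.75 0.75 0.75
```

In real projects these come from `sklearn.metrics`, but writing them out once makes the confusion-matrix cells concrete.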
Data loading and basic overview
Identification of missing values
Checking data types and duplicates
Outlier detection
Consistency check and data validation
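The five steps above can be walked through on a tiny hypothetical dataset with pandas. The column names, the deliberate defects, and the 0–120 plausible-age range are all invented for the illustration.

```python
import pandas as pd

# Hypothetical dataset seeded with a missing value, a duplicate row,
# a duplicate ID, and an outlier
df = pd.DataFrame({
    "user_id": [1, 2, 2, 4, 5],
    "age": [25, 31, 31, None, 200],
    "signup": ["2025-01-01", "2025-01-02", "2025-01-02",
               "2025-01-03", "2025-01-04"],
})

# 1. Loading and basic overview
print(df.shape)

# 2. Identification of missing values
print(df.isna().sum().to_dict())   # one missing age

# 3. Checking data types and duplicates
df["signup"] = pd.to_datetime(df["signup"])
print(df.duplicated().sum())       # one fully duplicated row

# 4. Outlier detection via a plausible-range rule (assumed 0-120 for age)
outliers = df[(df["age"] < 0) | (df["age"] > 120)]
print(outliers["age"].tolist())    # [200.0]

# 5. Consistency check: user IDs should be unique
print(df["user_id"].is_unique)     # False
```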
What is AI and Why it Matters to QA Professionals
Difference Between Traditional Software and AI Systems
Examples where AI is used (chatbots, recommendations, vision, text classification, etc.)
Definition and Goals of AI Testing
Difference Between “Testing AI Systems” and “Using AI for Testing”
Which components are tested in AI models (data, models, predictions, performance)
Non-functional characteristics and non-linear behavior of the model
Unpredictability and “black-box” nature
Bias and unfair decisions made by the model
Lack of deterministic expectations
Data Testing (Data Quality)
Model Testing (Accuracy, Precision)
Model Behavior Testing (Boundary Testing)
Evaluation of Metrics and Errors
Python + Pandas