A full-stack web application built with Next.js that leverages Microsoft Azure AI Vision to analyze uploaded car photos and identify their type. Users can upload images of cars, and the application will use AI to predict the vehicle's classification.
- AI-Powered Car Type Identification: Utilizes Azure AI Vision service to analyze car images and determine their type (e.g., Sedan, SUV, Truck).
- Image Upload: User-friendly interface for uploading car photographs.
- Results Display: Clearly presents the identified car type and potentially other relevant information from the AI analysis.
- Modern Web Stack: Built with Next.js (React) for a fast and responsive user experience.
- Server-Side Logic: API routes in Next.js handle communication with the Azure AI Vision service.
- Type Safety: Developed with TypeScript for enhanced code quality and maintainability.
- Tailwind CSS Styling: Uses the modern, utility-first CSS framework.
Demo video (Loom): https://www.loom.com/share/0bc3928fe023479495ecb9e552a16f57?sid=b3b80ef7-1859-4f5a-b05a-d5457ca51e38
- Upload: The user uploads an image of a car through the web interface.
- Backend Processing: The image is sent to a Next.js API route on the server.
- Azure AI Vision Analysis: The backend communicates with the Azure AI Vision service, sending the image for analysis.
- Data Retrieval: Azure AI Vision processes the image and returns analysis data, which can include object detection, image categorization, or tags relevant to identifying the car type.
- Result Interpretation: The backend processes the response from Azure to extract the most likely car type.
- Display: The identified car type is sent back to the frontend and displayed to the user.
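The result-interpretation step (extracting the most likely car type from Azure's response) can be sketched as a small pure function. Note that the tag names, confidence threshold, and function name below are illustrative assumptions, not the project's actual logic:

```typescript
// Hypothetical sketch: pick the most likely car type from Azure AI Vision tags.
// Azure's Image Analysis "Tags" feature returns [{ name, confidence }, ...].
interface VisionTag {
  name: string;
  confidence: number;
}

// Assumed set of car-type labels of interest (illustrative, not exhaustive).
const CAR_TYPES = ["sedan", "suv", "truck", "hatchback", "coupe", "van"];

function pickCarType(tags: VisionTag[], minConfidence = 0.5): string | null {
  const candidates = tags
    .filter((t) => CAR_TYPES.includes(t.name.toLowerCase()))
    .filter((t) => t.confidence >= minConfidence)
    .sort((a, b) => b.confidence - a.confidence);
  return candidates.length > 0 ? candidates[0].name : null;
}

// Example: tags shaped like Azure might return for an SUV photo.
const example = pickCarType([
  { name: "vehicle", confidence: 0.99 },
  { name: "suv", confidence: 0.87 },
  { name: "sedan", confidence: 0.41 },
]);
// example === "suv"
```

Keeping this step pure (plain data in, string out) makes it easy to unit-test without calling Azure.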
Core Framework & Libraries:
- Framework: Next.js (v13.4) - The React Framework for Production.
- UI Library: React (v18)
- DOM Rendering: React DOM (v18)
AI & Cloud Services:
- Microsoft Azure AI Vision: The core AI service for image analysis and car type identification. (Likely using the Azure SDK for JavaScript/TypeScript or REST APIs).
Backend & Database (if applicable for storing results/user data):
- Database ODM: Mongoose (v8.0.3) - Elegant MongoDB object modeling.
- Database Driver: MongoDB Native Driver (v6.3.0)
- Environment Variables: dotenv (v16.3.1)
Language & Styling:
- Language: TypeScript (v5)
- Styling: Tailwind CSS (v3.3.0)
- CSS Processing: PostCSS (v8), Autoprefixer (v10.0.1)
Development & Tooling:
- Linting: ESLint (v8) with eslint-config-next (v14.0.3)
- Type Definitions: For Node, React, Mongoose.
To get a local copy up and running, follow these steps.
- Node.js (v16.x or newer recommended)
- npm (comes with Node.js) or Yarn
- Azure Account: An active Microsoft Azure subscription.
- Azure AI Vision Resource: You need to have an AI Vision (or Cognitive Services) resource created in your Azure portal. You'll need the Endpoint and one of the Keys for this resource.
- MongoDB instance (Optional, if you plan to store analysis results or user data)
- Clone the repository:

  git clone https://github.com/jericrealubit/missionready-m2.git
  cd missionready-m2
- Install dependencies, using npm:

  npm install

  or using Yarn:

  yarn install
- Set up environment variables: Create a .env.local file in the root of your project. This file is ignored by Git.

  # .env.local

  # Azure AI Vision Credentials
  AZURE_VISION_ENDPOINT=your_azure_vision_endpoint_url
  AZURE_VISION_KEY=your_azure_vision_api_key

  # MongoDB Connection (Optional - if used)
  # MONGODB_URI=your_mongodb_connection_string

  # Add any other environment variables your application needs
  # NEXT_PUBLIC_SOME_API_KEY=your_public_api_key

  Replace the placeholders with your actual Azure AI Vision endpoint and key, and your MongoDB URI if used.
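Assuming those variable names, the server code might validate the credentials once at startup instead of failing deep inside a request handler. This is an illustrative sketch, not code from the repository:

```typescript
// Hypothetical helper: read and validate Azure credentials from the environment.
interface AzureVisionConfig {
  endpoint: string; // e.g. https://<resource-name>.cognitiveservices.azure.com
  key: string;
}

function getAzureVisionConfig(
  env: Record<string, string | undefined> = process.env
): AzureVisionConfig {
  const endpoint = env.AZURE_VISION_ENDPOINT;
  const key = env.AZURE_VISION_KEY;
  if (!endpoint || !key) {
    throw new Error(
      "Missing AZURE_VISION_ENDPOINT or AZURE_VISION_KEY; check .env.local"
    );
  }
  // Strip any trailing slash so later URL building stays predictable.
  return { endpoint: endpoint.replace(/\/+$/, ""), key };
}
```

Next.js loads .env.local automatically on the server, so no explicit dotenv call is needed in API routes.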
In the project directory, you can run:
- npm run dev: Runs the app in development mode. Open http://localhost:3000.
- npm run build: Builds the app for production to the .next folder.
- npm start: Starts the production server (requires npm run build first).
- npm run lint: Runs ESLint to analyze your code.
missionready-m2/
├── .next/
├── node_modules/
├── pages/
│ ├── api/
│ │ └── identify-car.ts # Example API route for Azure AI Vision interaction
│ ├── _app.tsx
│ ├── _document.tsx
│ └── index.tsx # Main page for image upload and results
├── public/
├── src/ # Optional: For components, utils, Azure SDK integration logic
│ ├── components/
│ └── services/
│ └── azureVisionService.ts # Logic for interacting with Azure AI Vision
├── styles/
├── .env.local
├── .eslintrc.json
├── next.config.js
├── package.json
├── postcss.config.js
├── tailwind.config.ts
├── tsconfig.json
└── README.md

(The project structure above includes a suggestion for where the Azure service logic might reside.)
The Next.js API routes (e.g., pages/api/identify-car.ts) will be responsible for:
- Receiving the uploaded image from the client.
- Securely authenticating and sending the image data to the Azure AI Vision API.
- Processing the JSON response from Azure.
- Sending the relevant car type information back to the client.
Contributions are welcome! Please follow these steps:
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
This project is currently unlicensed. Consider adding an open-source license if you wish.
