This project provides a web interface to visualize temperature and humidity data using charts. It features interactive charts for temperature and humidity, and dynamically displays icons based on the current weather conditions and time of day.
- Interactive Charts: Displays temperature and humidity data over time using Chart.js.
- Dynamic Icons: Shows icons representing the current weather (sun/moon) and temperature (hot/cold).
- Responsive Design: Optimized for various screen sizes.
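The dynamic icon selection described above (sun/moon by time of day, hot/cold by temperature) can be sketched in plain Python. The function name and thresholds below are illustrative assumptions, not the project's actual code:

```python
from datetime import datetime, time

def pick_icons(now: datetime, temperature_c: float) -> dict:
    """Choose weather icons from the current time and temperature."""
    # The daytime window is an assumption; a real app might use sunrise/sunset.
    is_day = time(6, 0) <= now.time() < time(20, 0)
    return {
        "sky": "sun" if is_day else "moon",
        "temp": "hot" if temperature_c >= 25.0 else "cold",
    }

print(pick_icons(datetime(2024, 7, 1, 12, 0), 28.0))  # {'sky': 'sun', 'temp': 'hot'}
```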
Before running this project, ensure you have the following installed:
- Docker and Docker Compose
- Python 3.11+ and pip (for running the scripts outside Docker)
```
project/
├── app.py                      # Main entry point
├── config/
│   ├── __init__.py
│   └── settings.py             # Configuration and environment variables
├── models/
│   ├── __init__.py
│   └── database.py             # Database connections and decorators
├── services/
│   ├── __init__.py
│   ├── sensor_service.py       # SensorService
│   ├── air_quality_service.py  # AirQualityService
│   ├── network_service.py      # NetworkService
│   ├── train_service.py        # TrainService
│   ├── todolist_service.py     # TodolistService
│   └── ssh_service.py          # SSHService
├── api/
│   ├── __init__.py
│   ├── sensor_routes.py        # Routes for sensor endpoints
│   ├── air_quality_routes.py   # Routes for air quality endpoints
│   ├── network_routes.py       # Routes for network devices
│   ├── train_routes.py         # Routes for trains
│   ├── todolist_routes.py      # Routes for to-do list
│   ├── security_routes.py      # Routes for security
│   ├── system_routes.py        # Routes for system/backup/SSH
│   └── expense_routes.py       # Routes for expenses
├── utils/
│   ├── __init__.py
│   ├── json_encoder.py         # Custom JSON encoder
│   └── decorators.py           # Common decorators
├── templates/                  # Existing HTML templates
│   ├── index.html
│   ├── temperature.html
│   ├── umid.html
│   ├── train.html
│   ├── air_quality.html
│   ├── raspi.html
│   ├── security.html
│   ├── expenses.html
│   └── index-lista.html
├── static/                     # Existing static files
│   └── favicon.ico
├── client/                     # Existing client classes
│   ├── PostgresClient.py
│   └── MongoClient.py
├── scraper.py                  # Web scraping script
├── send_email.py               # Email sending script
├── expenses_gsheet.py          # Google Sheets expenses handler
├── gcredentials.json           # Google API credentials file
└── requirements.txt            # Project dependencies
```
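The `utils/json_encoder.py` entry above points to a custom JSON encoder. A minimal sketch of what such an encoder might look like (the class name and handled types are assumptions) is shown here; it serializes the `Decimal` and timestamp values that psycopg2 returns for the project's `DECIMAL` and `TIMESTAMP` columns:

```python
import json
from datetime import date, datetime, time
from decimal import Decimal

class CustomJSONEncoder(json.JSONEncoder):
    """Serialize types that psycopg2 returns but json cannot handle natively."""
    def default(self, obj):
        if isinstance(obj, Decimal):
            return float(obj)        # DECIMAL(5,2) columns -> float
        if isinstance(obj, (datetime, date, time)):
            return obj.isoformat()   # TIMESTAMP/TIME columns -> ISO strings
        return super().default(obj)

row = {"temperature_c": Decimal("21.50"), "timestamp": datetime(2024, 1, 1, 8, 30)}
print(json.dumps(row, cls=CustomJSONEncoder))
# {"temperature_c": 21.5, "timestamp": "2024-01-01T08:30:00"}
```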
Clone the repository:

```bash
git clone https://github.com/0ri0nRo/SmartHouse.git
```
Use Docker Compose to build and start the containers. This command also starts any dependencies defined in the docker-compose.yml file:

```bash
docker-compose up -d --build
```

This command will:
- Build the Docker image as defined in the Dockerfile.
- Create and start containers based on the built image.
- Expose the application on port 5000.

To back up your data from the PostgreSQL Docker container:

```bash
sudo src/backup.sh
```
If you need to restore your PostgreSQL database from a .sql backup file, follow these steps.

First, ensure the backup file (e.g., backup.sql) is located in the src directory. Then use the following docker cp command to copy it into the PostgreSQL container:

```bash
docker cp ./src/backup.sql <container_id>:/backup.sql
```

After copying the backup file into the container, restore it with:

```bash
docker exec -i <container_id> psql -U postgres -d <YOUR_DATABASE> -f /backup.sql
```

- Separated concerns into distinct modules
- Each route group has its own blueprint
- Services contain business logic
- Database operations are centralized
- All environment variables centralized in config/settings.py
- Multiple environment support (dev, prod, test)
- Type conversion and validation
- Consistent error handling with decorators
- Proper logging throughout the application
- Database connection error handling
- Input validation
- Following Flask best practices
- Application factory pattern
- Blueprint registration
- Service layer pattern
- Clear separation of concerns
- Consistent naming conventions
- Comprehensive logging
- Easy to test and extend
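The error-handling and logging bullets above could be realized with a small decorator along these lines. This is a sketch under assumed names; the actual `utils/decorators.py` may differ:

```python
import functools
import logging

logger = logging.getLogger(__name__)

def handle_errors(default=None):
    """Log any exception and return a fallback value instead of crashing the caller."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                logger.exception("Error in %s", func.__name__)
                return default
        return wrapper
    return decorator

@handle_errors(default={"error": "sensor read failed"})
def read_sensor():
    raise RuntimeError("sensor offline")

print(read_sensor())  # {'error': 'sensor read failed'}
```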
```bash
pip install -r requirements.txt
cp .env.example .env
# Edit .env with your configuration

# Run database migrations
python -c "from app import create_app; app = create_app(); app.postgres_handler.create_all_tables()"

# Development
python app.py

# Production
gunicorn -w 4 -b 0.0.0.0:5000 "app:create_app()"
```

- Scalability: Easy to add new features and routes
- Maintainability: Clear code structure and separation of concerns
- Testability: Each component can be tested independently
- Reusability: Services can be reused across different routes
- Error Handling: Consistent error handling and logging
- Configuration: Centralized configuration management
- Performance: Better resource management and connection pooling
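The connection pooling mentioned above would use `psycopg2.pool` in this stack; the idea is sketched below with a generic stdlib queue so the example stays self-contained (the `make_conn` factory is a stand-in for a real database connection, e.g. `lambda: psycopg2.connect(...)`):

```python
import queue
from contextlib import contextmanager

class ConnectionPool:
    """A minimal fixed-size pool; psycopg2.pool.SimpleConnectionPool is the real thing."""
    def __init__(self, make_conn, size=4):
        self._pool = queue.Queue(maxsize=size)
        for _ in range(size):
            self._pool.put(make_conn())

    @contextmanager
    def connection(self):
        conn = self._pool.get()      # block until a connection is free
        try:
            yield conn
        finally:
            self._pool.put(conn)     # always return it to the pool

# Stand-in factory; a real app would pass a psycopg2 connect callable.
pool = ConnectionPool(lambda: object(), size=2)
with pool.connection() as conn:
    print(type(conn).__name__)  # object
```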
Flask==2.3.3
Flask-CORS==4.0.0
psycopg2-binary==2.9.7
pymongo==4.5.0
python-nmap==0.7.1
psutil==5.9.5
paramiko==3.3.1
python-dotenv==1.0.0
redis==4.6.0
requests==2.31.0
google-api-python-client==2.100.0
google-auth==2.23.0
google-auth-oauthlib==1.1.0
google-auth-httplib2==0.1.1
```ini
# Flask Configuration
FLASK_DEBUG=False
SECRET_KEY=your-secret-key-here

# Database Configuration
DB_HOST=localhost
DB_DATABASE=smart_home
DB_USER=postgres
DB_PASSWORD=your-db-password

# MongoDB Configuration
MONGO_URI=mongodb://localhost:27017/

# Email Configuration
SMTP_SERVER=smtp.gmail.com
SMTP_PORT=587
EMAIL_USERNAME=your-email@gmail.com
EMAIL_PASSWORD=your-app-password

# Google Sheets Configuration
GOOGLE_CREDENTIALS_PATH=./gcredentials.json
GOOGLE_SHEET_NAME=My NW

# Raspberry Pi SSH Configuration
HOST_PI=192.168.1.100
PORT_PI=22
USERNAME_PI=pi

# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379

# Network Configuration
NETWORK_RANGE=192.168.178.0/24

# System Configuration
BACKUP_SCRIPT_PATH=/usr/local/bin/backup.sh

# Train API Configuration
TRAIN_API_URL=https://iechub.rfi.it/ArriviPartenze/ArrivalsDepartures/Monitor?placeId=2416&arrivals=False
```
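A settings module matching the variables above might read and type-convert them like this. This is a sketch using `os.environ` (the project also lists python-dotenv for loading the .env file); the `env` helper and `Config` class are illustrative assumptions whose names mirror the sample configuration:

```python
import os

def env(name, default=None, cast=str):
    """Read an environment variable and cast it, falling back to a default."""
    raw = os.environ.get(name)
    return cast(raw) if raw is not None else default

class Config:
    DB_HOST = env("DB_HOST", "localhost")
    DB_DATABASE = env("DB_DATABASE", "smart_home")
    SMTP_PORT = env("SMTP_PORT", 587, cast=int)          # type conversion
    FLASK_DEBUG = env("FLASK_DEBUG", False,
                      cast=lambda v: v.lower() in ("1", "true", "yes"))
```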
```sql
-- Create sensor readings table
CREATE TABLE IF NOT EXISTS sensor_readings (
    id SERIAL PRIMARY KEY,
    temperature_c DECIMAL(5,2) NOT NULL,
    humidity DECIMAL(5,2) NOT NULL,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create network devices table
CREATE TABLE IF NOT EXISTS network_devices (
    id SERIAL PRIMARY KEY,
    ip_address VARCHAR(45) NOT NULL,
    hostname VARCHAR(255),
    status VARCHAR(50),
    timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Create trains table
CREATE TABLE IF NOT EXISTS trains (
    id SERIAL PRIMARY KEY,
    train_number VARCHAR(50),
    destination TEXT,
    time TIME,
    delay INTEGER,
    platform VARCHAR(10),
    stops TEXT,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create air quality table
CREATE TABLE IF NOT EXISTS air_quality (
    id SERIAL PRIMARY KEY,
    smoke DECIMAL(10,2) NOT NULL,
    lpg DECIMAL(10,2) NOT NULL,
    methane DECIMAL(10,2) NOT NULL,
    hydrogen DECIMAL(10,2) NOT NULL,
    air_quality_index DECIMAL(10,2) NOT NULL,
    air_quality_description VARCHAR(100) NOT NULL,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create alarms status table
CREATE TABLE IF NOT EXISTS alarms_status (
    id SERIAL PRIMARY KEY,
    status BOOLEAN NOT NULL,
    timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Create indexes for better performance
CREATE INDEX IF NOT EXISTS idx_sensor_readings_timestamp ON sensor_readings(timestamp);
CREATE INDEX IF NOT EXISTS idx_network_devices_timestamp ON network_devices(timestamp);
CREATE INDEX IF NOT EXISTS idx_trains_time ON trains(time);
CREATE INDEX IF NOT EXISTS idx_trains_destination ON trains(destination);
CREATE INDEX IF NOT EXISTS idx_air_quality_timestamp ON air_quality(timestamp);
CREATE INDEX IF NOT EXISTS idx_alarms_status_timestamp ON alarms_status(timestamp);
```

```yaml
version: '3.8'

services:
  app:
    build: .
    ports:
      - "5000:5000"
    environment:
      - FLASK_ENV=production
    env_file:
      - .env
    depends_on:
      - postgres
      - redis
      - mongo
    volumes:
      - ./gcredentials.json:/app/gcredentials.json:ro
      - ./backup.sh:/usr/local/bin/backup.sh:ro

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: ${DB_DATABASE}
      POSTGRES_USER: ${DB_USER}
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./migrations/init_tables.sql:/docker-entrypoint-initdb.d/init_tables.sql
    ports:
      - "5432:5432"

  mongo:
    image: mongo:6
    volumes:
      - mongo_data:/data/db
    ports:
      - "27017:27017"

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data
    ports:
      - "6379:6379"

volumes:
  postgres_data:
  mongo_data:
  redis_data:
```

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    nmap \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements and install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create non-root user
RUN useradd -m -u 1000 app && chown -R app:app /app
USER app

# Expose port
EXPOSE 5000

# Run application
CMD ["python", "app.py"]
```

```python
import unittest

from app import create_app
from config import TestingConfig


class FlaskAppTestCase(unittest.TestCase):
    def setUp(self):
        self.app = create_app()
        self.app.config.from_object(TestingConfig)
        self.client = self.app.test_client()
        self.app_context = self.app.app_context()
        self.app_context.push()

    def tearDown(self):
        self.app_context.pop()

    def test_index_route(self):
        response = self.client.get('/')
        self.assertEqual(response.status_code, 200)

    def test_api_sensors_no_data(self):
        response = self.client.get('/api/sensors')
        self.assertIn(response.status_code, [200, 404])


if __name__ == '__main__':
    unittest.main()
```

```python
from app import create_app

# Create and run the application
app = create_app()

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000, debug=True)
```

```python
# Create new blueprint in routes/new_feature.py
from flask import Blueprint, jsonify

new_feature_bp = Blueprint('new_feature', __name__)

@new_feature_bp.route('/new_endpoint')
def new_endpoint():
    return jsonify({'message': 'New feature'})

# Register in app.py
from routes.new_feature import new_feature_bp

app.register_blueprint(new_feature_bp, url_prefix='/api')
```

```python
# Create service in services/new_service.py
class NewService:
    def __init__(self, db_handler):
        self.db = db_handler

    def do_something(self):
        # Business logic here
        pass

# Initialize in app.py
app.new_service = NewService(app.postgres_handler)
```

This refactored structure provides a much more maintainable, scalable, and professional codebase while preserving all of the original functionality.