Sooraj Ottapalam Ramachandran 6 days ago
commit
bf3d76be6a
55 changed files with 4655 additions and 0 deletions
  1. 368 0
      Allinone.txt
  2. 123 0
      Full.py
  3. 127 0
      Full2.py
  4. 112 0
      General.py
  5. 165 0
      Generalwakeup.py
  6. BIN
      Gigabee.mp3
  7. 55 0
      Gigabeeprotect.txt
  8. BIN
      Hydrosense.mp3
  9. 55 0
      Hydrosense.txt
  10. 53 0
      IoTConfiguratorSolution.txt
  11. 194 0
      Labtourmode.py
  12. BIN
      NetworkAnalyser.mp3
  13. 37 0
      Networkanalyzer.txt
  14. BIN
      Pushtotalk.mp3
  15. 52 0
      Pushtotalk.txt
  16. 150 0
      Qrcode.py
  17. BIN
      RFID.mp3
  18. 50 0
      RFIDautomationenabler.txt
  19. BIN
      Welcome.mp3
  20. 71 0
      button.py
  21. 109 0
      checking.py
  22. 40 0
      device_data.csv
  23. 952 0
      device_data.txt
  24. 136 0
      generalnew.py
  25. BIN
      give.mp3
  26. BIN
      good.mp3
  27. BIN
      haha.mp3
  28. 127 0
      hm.py
  29. BIN
      hmm.mp3
  30. BIN
      iotconfig.mp3
  31. 172 0
      latestfetch.py
  32. 28 0
      ml.py
  33. BIN
      mode1.mp3
  34. BIN
      mode2.mp3
  35. 126 0
      nao.py
  36. 187 0
      newcache.py
  37. 116 0
      newfetch.py
  38. BIN
      next.mp3
  39. 153 0
      nokam.py
  40. BIN
      please.mp3
  41. 35 0
      prediction.py
  42. 57 0
      qrcode123.py
  43. BIN
      smartsanitiser.mp3
  44. 49 0
      smartsanitiserdispenser.txt
  45. BIN
      speech1.mp3
  46. BIN
      sure.mp3
  47. BIN
      sure1.mp3
  48. 28 0
      test.py
  49. 58 0
      testcamera.py
  50. 69 0
      testing.py
  51. BIN
      th.mp3
  52. BIN
      ty.mp3
  53. BIN
      wifi_model.pkl
  54. 33 0
      wifi_scan.py
  55. 568 0
      wifi_signals.csv

+ 368 - 0
Allinone.txt

@@ -0,0 +1,368 @@
+Overview of Network Analyzer (+)
+The Network Analyzer (+) is a device developed to test and analyze the availability and quality of Low Power Wide Area Networks (LPWANs) such as NB-IoT (Narrowband Internet of Things) and LTE-M. This tool is essential for partners and customers to validate network coverage and performance for various IoT projects.
+
+Background and Purpose
+LPWANs are known for their extensive cell coverage, which makes them suitable for areas with poor network quality. However, testing the availability and quality of LPWANs across multiple planned project locations can be costly and time-consuming. The Network Analyzer (+) addresses this issue by allowing partners and customers to perform these tests independently. This device provides real-time data on signal strength and quality, facilitating a more efficient assessment process.
+
+Technical Details
+The Network Analyzer (+) is equipped with the following technical features:
+
+Reference Signal Received Power (RSRP): Measures the power level received from a reference signal.
+Reference Signal Received Quality (RSRQ): Assesses the quality of the received reference signal.
+Timestamp: Records the time of each measurement.
+Solution Description
+The Network Analyzer (+) functions by transmitting data on signal strength and quality for LPWANs (NB-IoT Cat-M1) and GSM at regular intervals to a server. The collected data can be viewed and analyzed through a dashboard, where it can be filtered by time. Additionally, the device includes an integrated E-paper display that shows the current measured values with a quality indication through bars, providing an immediate visual representation of the network conditions.
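The quality-bar indication on the E-paper display can be sketched as a simple threshold mapping from an RSRP reading to a bar count. The cut-off values below are illustrative assumptions, not the device's actual calibration:

```python
def rsrp_to_bars(rsrp_dbm):
    """Map an RSRP reading (dBm) to a 0-4 bar quality indication.

    Thresholds are illustrative assumptions; the Network Analyzer (+)
    firmware may use different cut-offs.
    """
    if rsrp_dbm >= -80:
        return 4  # excellent
    if rsrp_dbm >= -90:
        return 3  # good
    if rsrp_dbm >= -100:
        return 2  # fair
    if rsrp_dbm >= -110:
        return 1  # poor
    return 0      # no usable signal
```

The same mapping could be applied to RSRQ with its own thresholds, since both values are transmitted alongside each timestamped measurement.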
+
+Application Opportunities
+The Network Analyzer (+) offers several practical applications:
+
+IoT Investment Validation and Assurance: Ensures that the network infrastructure can support IoT deployments, providing confidence in the investment.
+Building Trust in the Network: Demonstrates the reliability and quality of the network to potential users.
+Self-Testing Solution: Allows service providers to test network coverage and quality independently without requiring extensive external support.
+Project Timeline
+The production timeline for the Network Analyzer (+) is as follows:
+
+April 2023: 500 devices produced
+End of May 2023: Full rollout planned
+Status and Reusability
+Current Status: In production
+Location: Germany
+Type: Minimum Viable Product (MVP)
+Reusability: Yes, the device can be reused for multiple projects and locations.
+Key Points of Contact
+For more information or to express interest in the Network Analyzer (+), the main point of contact is:
+
+SPOC: Tim Schaerfke
+Conclusion
+The Network Analyzer (+) is a crucial tool for validating and ensuring the quality of LPWAN networks. It empowers partners and customers to conduct their own network assessments, thereby saving time and reducing costs. By providing real-time data and an easy-to-use interface, the Network Analyzer (+) enhances the ability to plan and deploy IoT solutions with confidence.
+
+
+Overview of GigaBee Protect Solution
+The GigaBee Protect solution is a state-of-the-art IoT-based system designed to protect beehives from theft and environmental hazards. It is a compact, battery-operated device that provides beekeepers with real-time monitoring and alerts, ensuring the safety and security of their bee colonies.
+
+Background and Purpose
+Current methods of protecting beehives face several challenges:
+
+Theft and Vandalism: Beehives are susceptible to theft and vandalism, causing significant losses for beekeepers.
+Environmental Hazards: Beehives can be adversely affected by environmental conditions, including falls or displacement due to weather events.
+Limited Monitoring: Traditional methods offer limited monitoring capabilities, making it difficult for beekeepers to respond promptly to threats.
+The GigaBee Protect solution addresses these issues by providing a reliable, self-sufficient monitoring system that enhances beehive security and environmental resilience.
+
+Technical Details
+Hardware Components:
+Sensors: An accelerometer (LIS2DH12) for detecting movement and environmental changes.
+Microcontroller: A NORDIC nRF9160 microcontroller to manage device operations and connectivity.
+Modem: Utilizes LTE-M and NB-IoT for cellular positioning and communication.
+Battery: Designed to ensure the device remains operational for over 12 months on a single charge.
+Protective Casing: Ensures durability and protection against environmental factors.
+Software:
+Firmware: Developed using efficient programming to ensure low power consumption and robust performance.
+Cloud Platform: Allows remote monitoring and data visualization for beekeepers.
+Connectivity:
+Cellular Technology: Supports LTE-M and NB-IoT for reliable network connections, ensuring the device remains operational even in remote areas.
+Battery Monitoring: Constantly tracks battery levels to ensure continuous operation.
+Network Status Monitoring: Continuously checks network availability and quality to maintain stable connections.
+Solution Description
+The GigaBee Protect device uses advanced IoT technology to provide a stable and reliable monitoring system for beehives. Key features include:
+
+Movement Detection: Alerts beekeepers if the hive is moved or displaced.
+Environmental Monitoring: Provides data on environmental conditions affecting the hive.
+Battery Life: Long-lasting battery designed for over 12 months of operation without recharging.
+Real-time Alerts: Sends immediate notifications to beekeepers in case of theft, displacement, or environmental threats.
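The movement detection above can be sketched from raw accelerometer output: at rest the LIS2DH12 reports a magnitude of roughly 1 g (gravity), so a sustained deviation suggests the hive was lifted or tipped. The 0.25 g threshold here is an illustrative assumption, not the device's actual firmware value:

```python
import math

def hive_moved(sample, threshold_g=0.25):
    """Return True when an accelerometer sample deviates from rest.

    `sample` is an (x, y, z) tuple in g. At rest the magnitude is
    ~1 g; a deviation beyond `threshold_g` (illustrative value)
    suggests the hive was lifted or tipped.
    """
    x, y, z = sample
    magnitude = math.sqrt(x * x + y * y + z * z)
    return abs(magnitude - 1.0) > threshold_g
```

In practice the firmware would debounce over several samples before raising a theft or displacement alert, to avoid false alarms from wind or animals brushing the hive.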
+Application Opportunities
+The GigaBee Protect solution is versatile and can be deployed in various settings:
+
+Apiaries: Protects commercial and private beehives from theft and environmental threats.
+Agricultural Areas: Ensures the safety of bee colonies in agricultural settings where pollination is critical.
+Remote Locations: Suitable for beehives located in remote or hard-to-reach areas where regular monitoring is challenging.
+Project Timeline
+06/21: Project inception and initial design phase.
+08/21: Installation of devices in 10 Vodafone beehives.
+08/21 - 11/21: Testing and showcasing at various events, including a press conference and Youtopia event in Cologne.
+Ongoing: Further development, testing, and integration based on feedback.
+Future Development
+The current status of the GigaBee Protect solution is in active use and ongoing development. Future plans include:
+
+Scaling Production: Aiming to increase production by 2023.
+Feature Enhancements: Adding capabilities such as a foulbrood detector, smoke detector, and a beekeeper app for detailed hive parameter visualization.
+Productization: Securing funding and completing patent processes within the next 2 to 4 years.
+Key Points of Contact
+For further information or to express interest in the GigaBee Protect solution, the main point of contact is:
+
+SPOC: Vodafone IoT Future Lab Team
+Conclusion
+The GigaBee Protect solution offers a robust and reliable monitoring system for beekeepers, ensuring the safety and security of their beehives. With continuous monitoring of movement and environmental conditions, the GigaBee Protect device provides peace of mind, significantly enhancing response times and protection against theft and hazards.
+
+
+
+Overview of HydroSense
+HydroSense is an innovative Internet of Things (IoT) solution developed for the continuous and real-time monitoring of lake water quality. This system employs a buoy equipped with various sensors to measure multiple water parameters, offering a more efficient and timely assessment compared to traditional manual sampling methods.
+
+Background and Purpose
+Traditional methods of analyzing lake water quality involve manual sampling, typically performed only a few times per year. This sporadic testing results in a lack of real-time data, making it difficult to monitor ongoing pollution and environmental changes. The European guideline EG/2000/60 highlights the necessity for automated monitoring solutions, prompting the development of HydroSense to address this need.
+
+Technical Details
+Hardware Components:
+
+Sensors: The buoy is equipped with four sensors:
+One single sensor
+Three sensors integrated into a multiparameter sonde
+Power Supply: The system is powered by solar panels, ensuring a sustainable and continuous power source for the sensors and other components.
+Software and Connectivity:
+
+Dashboard: Data collected by the sensors is transmitted to the IoT Future Lab Dashboard, where it is visualized and analyzed.
+Connectivity: Uses Narrowband IoT (NB-IoT) for reliable and efficient data transmission from the buoy to the dashboard.
+Solution Description
+HydroSense monitors the following water parameters:
+
+Temperature
+pH Level
+Electrical Conductivity
+Redox Potential
+Dissolved Oxygen
+Turbidity
+The buoy’s design, including its case and swim body, is 3D printed to ensure cost-effective production and deployment. The collected data is sent over NB-IoT and displayed on a dashboard, allowing for real-time monitoring and analysis of lake water quality.
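A compact payload along the following lines could carry the six parameters above over NB-IoT, where small frames matter; the field order and x100 fixed-point scaling are illustrative assumptions, not the actual HydroSense frame format:

```python
import struct

# Hypothetical field order for the six HydroSense parameters.
FIELDS = ["temperature", "ph", "conductivity", "redox", "oxygen", "turbidity"]

def encode_reading(reading):
    """Pack one reading into 12 bytes: six unsigned 16-bit values,
    each scaled by 100, to keep the NB-IoT payload small."""
    scaled = [round(reading[f] * 100) for f in FIELDS]
    return struct.pack(">6H", *scaled)

def decode_reading(payload):
    """Invert encode_reading on the dashboard side."""
    values = struct.unpack(">6H", payload)
    return {f: v / 100 for f, v in zip(FIELDS, values)}
```

Two decimal places of fixed-point precision is enough for pH and dissolved oxygen while keeping every reading to a single 12-byte frame.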
+
+Application Opportunities
+The HydroSense solution can be deployed in any lake, making it highly versatile. Potential applications include:
+
+Municipal Use: Cities and communities can use HydroSense to monitor local water bodies.
+Environmental Restoration: Companies involved in environmental restoration projects can leverage this technology to assess and improve lake conditions.
+Project Timeline
+The development and implementation timeline for HydroSense includes:
+
+07 June 2022: Project initiation
+20 July 2022: Arrival of sensors
+05 August 2022: Completion of swim body and case
+05 September 2022: Project completion
+08 September 2022: Implementation with Landesanstalt für Umwelt Baden-Württemberg (LUBW) at Bodensee
+Showcase Features
+The HydroSense showcase is a scaled-down, interactive version designed for demonstrations:
+
+Measured Parameters: The showcase focuses on two key parameters: oxygen and temperature.
+Design: It features a 74-liter tank with synthetic plants, stones, and sand to simulate a natural lake environment. Sensors float in the water and send real-time data via 5G to a dashboard.
+Interactivity: Users can alter water conditions using two buttons:
+One button increases oxygen levels.
+Another button raises the water temperature.
+These changes are accompanied by bubbles and lights for an engaging and educational user experience.
+Future Development and Communication
+HydroSense's showcase is set for further testing and lab establishment by March 2023. Communication strategies include updates through Vodafone's external and internal channels, such as the newsroom and social media platforms, ensuring widespread awareness and engagement with the project.
+
+Conclusion
+HydroSense represents a significant advancement in automated environmental monitoring, providing real-time data and analysis to help manage and protect lake ecosystems. Its innovative design and technology make it a valuable tool for municipalities, environmental organizations, and restoration projects.
+
+
+
+Overview of IoT Configurator Solution
+The IoT Configurator Solution is an advanced platform developed by Vodafone, designed to facilitate the creation and deployment of IoT prototypes. This platform leverages IoT hardware cubes, connectivity technologies, and cloud services to enable rapid prototyping and scalable IoT solutions.
+
+Background and Purpose
+Vodafone Innovation Park in Germany has developed the IoT Configurator to address the complexities of IoT ecosystems and the need for end-to-end (E2E) perspectives in launching successful IoT products. The solution allows for the co-creation of innovative prototypes and solutions to unlock the full potential of IoT. It aims to provide customers with a comprehensive understanding of IoT, from the initial idea to the final product, emphasizing individualization and scalability.
+
+Technical Details
+Hardware Components:
+
+Sensors: Various sensors to capture environmental and operational data.
+Microcontroller Module: For processing and managing sensor data.
+Case: Enclosures to protect and house IoT components.
+Energy Source: Battery or other power sources to ensure continuous operation.
+3D Printing: Used for creating custom components and enclosures.
+Software and Data Visualization:
+
+Firmware: Manages sensor data collection and communication.
+Dashboard: Visualizes collected data for real-time monitoring and analysis.
+Web-App Integration: Facilitates user interaction and control over IoT prototypes.
+Connectivity Technologies:
+
+Cellular Connectivity: 2G, 4G, 5G, NB-IoT, and LTE-M for robust data transmission.
+Cloud Server: For data storage, processing, and analytics.
+Solution Description
+The IoT Configurator Solution offers an extensive introduction to IoT, covering the history, technical details, and development process. It provides customers the opportunity to develop their own IoT prototypes using interactive hardware cubes. The solution includes:
+
+E2E IoT Rapid Prototyping: From ideation to deployment, ensuring scalable solutions.
+Real-time Data Visualization: Through a user-friendly dashboard.
+Hands-on Experience: With customizable IoT hardware cubes.
+Application Opportunities
+The IoT Configurator can be utilized in various scenarios, including:
+
+VIP Lab Tours: Demonstrating IoT capabilities to potential clients.
+Customer Contact: Engaging customers with interactive IoT solutions.
+Fairs and Exhibitions: Showcasing IoT innovations and prototypes.
+Project Timeline
+Initial Idea of First IoT Showcase: January 2021
+Pitch and Agency Selection: March 2021
+Hardware Alignment and Exhibit Delivery: February 2021
+Content Preparation and Programming: Throughout 2021 and early 2022
+Estimated Completion of Showcase: February 2023
+Future Development
+The IoT Configurator is an ongoing project with future goals including:
+
+Extended Use Cases: Adapting the solution for broader applications across different industries.
+Productization: Moving from a showcase to a market-ready product.
+Enhanced Connectivity: Exploring advanced connectivity options for better performance.
+Key Points of Contact
+For more information or to express interest in the IoT Configurator Solution, please contact:
+
+Laura Biermann: Vodafone SPOC
+Conclusion
+The IoT Configurator Solution by Vodafone provides a robust platform for developing and deploying IoT prototypes. By combining hardware, software, and connectivity technologies, it offers a comprehensive approach to IoT development, enhancing transparency, analytical capabilities, and operational efficiency. This solution holds potential for widespread application across various industries, driving innovation and improving revenue generation.
+
+
+
+Overview of Push To Talk (PTT) Solution
+The Push To Talk (PTT) solution is an emergency call system designed for office spaces. It is a wall-mounted, battery-operated unit that enables quick and reliable communication with emergency services such as the police, fire service, or maintenance department.
+
+Background and Purpose
+Current emergency calling devices in office environments face several limitations:
+
+They are dependent on power supply and Voice over Internet Protocol (VoIP) systems, which can fail during power outages or network issues.
+Companies often lack visibility into the operational status of these devices, which can result in employees being at high risk during emergencies if the devices are not functioning properly.
+The PTT solution addresses these shortcomings by offering a more reliable, self-sufficient emergency communication system.
+
+Technical Details
+Hardware Components:
+
+Push Buttons: Three buttons designated for different emergency services.
+Connectivity Board: A custom-developed board that facilitates communication.
+Modem: A Quectel modem for network connectivity.
+Battery: Ensures the device remains operational even during power outages.
+Microphone and Speaker: For clear audio communication.
+Microcontroller: Manages the device’s operations and connectivity.
+Software:
+
+Developed using React and C++ to ensure robust performance.
+Connectivity:
+
+Supports 4G and 2G Circuit Switched Fallback (CSFB) for reliable network connections.
+Solution Description
+The PTT device uses cellular technology to ensure a stable and reliable connection, even when traditional power and network systems fail. Key features include:
+
+Battery Monitoring: The device constantly monitors its battery level to ensure it is always ready for use.
+Network Status Monitoring: Continuously checks network availability and quality.
+Emergency Buttons: Three push buttons allow users to call specific emergency services directly.
+Modem Functionality: The modem actively searches for the best available connection to ensure successful communication.
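The modem's search for the best available connection can be sketched as a preference-ordered fallback from 4G to 2G (CSFB); the -105 dBm usability threshold below is an illustrative assumption, not the PTT firmware's actual value:

```python
def pick_bearer(available):
    """Choose a bearer for an emergency call, preferring 4G and
    falling back to 2G (CSFB).

    `available` maps technology name to measured signal power in
    dBm; the -105 dBm cut-off is an illustrative assumption.
    Returns the chosen technology, or None if nothing is usable.
    """
    for tech in ("4G", "2G"):
        signal_dbm = available.get(tech)
        if signal_dbm is not None and signal_dbm > -105:
            return tech
    return None
```

A None result would itself be worth reporting via the network status monitoring, since a PTT unit that cannot place a call is exactly the failure mode the solution is designed to surface.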
+Application Opportunities
+The PTT solution is highly versatile and can be deployed in various settings:
+
+Office Buildings: Ensures employee safety on every floor by providing immediate access to emergency services.
+Public Places: Useful in locations that require constant maintenance and where quick communication with emergency services is crucial.
+Hazardous Environments: Suitable for areas where mobile phones are not permitted, but emergency communication is necessary.
+Project Timeline
+04/22: Start of the project, with the initial design of the audio circuit.
+05/22 - 08/22: Testing of the first Printed Circuit Board (PCB) for audio functionality and official review of the audio circuit from Quectel.
+09/22: Designing the second version of the audio circuit, integrating it, and completing the 3D design.
+Ongoing Tasks: Further testing and integration based on initial results and feedback.
+Future Development
+The current status of the PTT solution is a Proof of Concept (PoC), and efforts are underway to secure funding for productization. The estimated timeline for patent completion is 2 to 4 years.
+
+Key Points of Contact
+For further information or to express interest in the PTT solution, the main point of contact is:
+
+SPOC: TETI Tim Schaerfke
+Conclusion
+The Push To Talk solution provides a robust and reliable emergency communication system for office spaces and other environments where traditional methods may fail. With continuous monitoring of battery levels and network status, the PTT device ensures that help is always just a button press away, significantly enhancing safety and response times during emergencies.
+
+
+Overview of RFID Automation Enabler Solution
+The RFID Automation Enabler is an advanced IoT-based device designed to track samples, products, and commodities at production sites. It utilizes RFID technology combined with LTE-M connectivity to provide real-time tracking and data forwarding to a cloud instance, enhancing automation and inventory management for various industries.
+
+Background and Purpose
+Prezero, a German environmental services provider, faces challenges in tracking the ingredients of processed trash cubes for resale purposes. Traditional tracking methods are inefficient due to the attenuation of RFID signals through trash piles. The RFID Automation Enabler addresses these challenges by offering:
+
+Enhanced Tracking Accuracy: Mitigates signal attenuation issues.
+Real-time Data Transmission: Provides up-to-date tracking information.
+Improved Inventory Management: Helps streamline processes and optimize resources.
+Technical Details
+Hardware Components:
+RFID Reader: ThingMagic M6e-Nano for reading RFID tags.
+Modem: BG95-M3 Modem (LTE-M) for reliable network connectivity.
+Battery: Lithium-ion battery ensures continuous operation.
+Extendable Rods: Allows the device to be positioned at optimal heights for accurate readings.
+Software:
+Firmware: Manages RFID reading and data transmission.
+Dashboard: Visualizes the collected data, showing the presence of Electronic Product Codes (EPCs).
+Connectivity
+LTE-M: Ensures robust data transmission to the cloud, even in challenging environments.
+Future Upgrade: Potential upgrade to RedCap devices for enhanced performance.
+Solution Description
+The RFID Automation Enabler reads all EPCs of nearby RFID tags and forwards this data through LTE-M to a backend system. A user-friendly dashboard visualizes the data, showing real-time status of tracked items. Key features include:
+
+Reliable RFID Reading: Overcomes signal attenuation issues in dense environments.
+Real-time Data Forwarding: Ensures immediate availability of tracking information.
+Ease of Deployment: Extendable rods facilitate easy positioning and testing on-site.
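The read-and-forward step above can be sketched as a deduplication pass over the stream of tag reads, so that each EPC is sent over LTE-M to the backend only once per session. This is an illustrative sketch, not the enabler's actual firmware:

```python
def collect_epcs(tag_reads, seen=None):
    """Deduplicate EPCs from a stream of RFID tag reads before
    forwarding them to the backend.

    `tag_reads` is an iterable of EPC hex strings as produced by
    the reader; `seen` carries state across polling cycles.
    Returns the previously unseen EPCs in read order.
    """
    if seen is None:
        seen = set()
    new = []
    for epc in tag_reads:
        if epc not in seen:
            seen.add(epc)
            new.append(epc)
    return new
```

Deduplicating on the device keeps LTE-M traffic proportional to the number of distinct items rather than the (much larger) number of raw reads, which matters when the reader polls continuously through dense material.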
+Application Opportunities
+The RFID Automation Enabler can be deployed across various industries and environments:
+
+Production Sites: Tracks commodities and products within large manufacturing areas.
+Waste Management: Monitors the composition of processed trash cubes for resale.
+Logistics and Supply Chain: Enhances visibility and efficiency in inventory management.
+Project Timeline
+Project Start: Initial design and development phase.
+On-site Presentation: Demonstration planned to address signal attenuation concerns.
+Prototype Completion: Estimated by 1st February 2024.
+Future Development
+Currently a Proof of Concept (PoC), the RFID Automation Enabler is under development with plans for further testing and refinement. Future goals include:
+
+Improved RFID Range: Enhancing the device's ability to read tags through dense materials.
+Productization: Moving from PoC to a market-ready product.
+Extended Use Cases: Adapting the solution for broader applications across different industries.
+Key Points of Contact
+For further information or to express interest in the RFID Automation Enabler solution, the main points of contact are:
+
+Tim Schaerfke
+Laura Biermann
+Conclusion
+The RFID Automation Enabler provides a robust and efficient solution for tracking items in challenging environments. By combining RFID technology with LTE-M connectivity, it offers real-time tracking and data visualization, significantly improving inventory management and operational efficiency. This solution holds potential for widespread application across various industries, enhancing transparency, analytical capabilities, and revenue generation.
+
+
+Overview of Smart Sanitizer Dispenser Solution
+The Smart Sanitizer Dispenser is an IoT-based solution designed to enhance workplace hygiene by automating sanitizer dispenser management. This system ensures that sanitizer dispensers are always functional and filled, providing a reliable health security measure for companies of all sizes.
+
+Background and Purpose
+Ensuring compliance with pandemic-related health guidelines is a challenge for many companies, especially those with large workspaces like the Vodafone campus. Manual monitoring and refilling of sanitizer dispensers are labor-intensive and inefficient. The Smart Sanitizer Dispenser aims to:
+
+Automate Fill Level Monitoring: Eliminate the need for manual checks.
+Enhance Workplace Hygiene: Ensure dispensers are always filled and functional.
+Optimize Dispenser Placement: Use data analytics to improve dispenser locations.
+Technical Details
+Hardware Components:
+Microcontroller: Measures and calculates the sanitizer usage.
+Sensors: Detect the fill level of the dispenser.
+Connectivity Module: Transmits data over Narrowband-IoT (NB-IoT).
+Battery: Powers the device for continuous operation.
+Software:
+Firmware: Developed to efficiently monitor and transmit usage data.
+Dashboard: Provides a user-friendly interface for monitoring dispenser status and usage analytics.
+Connectivity
+Narrowband-IoT (NB-IoT): Ensures reliable data transmission even in areas with poor network coverage.
+Continuous Monitoring: Regularly updates fill level status and usage statistics.
+Solution Description
+The Smart Sanitizer Dispenser automates the monitoring process, ensuring that dispensers are always ready for use. Key features include:
+
+Automated Fill Level Control: Housekeeping staff can check fill levels via a dashboard, eliminating manual checks.
+Usage Analytics: Tracks how often dispensers are used, helping to optimize their placement and predict refill dates.
+Real-time Alerts: Notifies staff when a refill is needed or if there are any issues with the dispenser.
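The refill-date prediction mentioned above can be sketched from the current fill level and average daily consumption; the 50 ml safety reserve below is an illustrative assumption, not a value from the actual solution:

```python
def days_until_refill(fill_level_ml, avg_daily_use_ml, reserve_ml=50):
    """Estimate whole days until a dispenser needs refilling.

    `fill_level_ml` comes from the fill-level sensor and
    `avg_daily_use_ml` from the usage analytics; `reserve_ml`
    (illustrative) is the level at which staff should be alerted.
    Returns 0 if a refill is already due, or None without usage data.
    """
    usable = fill_level_ml - reserve_ml
    if usable <= 0:
        return 0  # already at or below the reserve: alert now
    if avg_daily_use_ml <= 0:
        return None  # no usage data yet
    return int(usable // avg_daily_use_ml)
```

The dashboard could surface this estimate per dispenser so housekeeping can batch refills by floor instead of reacting to individual alerts.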
+Application Opportunities
+The smart fill level tracking solution is versatile and can be applied to various types of dispensers beyond sanitizers, offering potential for more complex Industry 4.0 applications:
+
+Office Buildings: Maintains hygiene in large corporate environments.
+Public Spaces: Ensures dispensers in high-traffic areas are always functional.
+Industrial Settings: Can be adapted for other dispensing needs in manufacturing and logistics.
+Project Timeline
+04/21: Project inception and initial design phase.
+12/07/21: Completion of initial showcase.
+Ongoing: Further evaluation and potential expansion to other types of dispensers.
+Future Development
+The Smart Sanitizer Dispenser solution has completed its prototype phase and is currently in use. Future developments include:
+
+Expansion: Adapting the technology for use with other dispensers.
+Enhanced Analytics: Improving data analytics capabilities for better insights.
+Integration: Potential integration with other smart building solutions.
+Key Points of Contact
+For further information or to express interest in the Smart Sanitizer Dispenser solution, the main point of contact is:
+
+SPOC: Leon Kersten
+Conclusion
+The Smart Sanitizer Dispenser offers a robust and efficient solution for maintaining hygiene in workplaces and public spaces. By automating fill level monitoring and providing real-time usage data, the system ensures that dispensers are always ready for use, enhancing health security and operational efficiency.
+
+The battery used here is a 4.2 V lithium-ion battery.

+ 123 - 0
Full.py

@@ -0,0 +1,123 @@
+import pygame
+import speech_recognition as sr
+from openai import OpenAI
+from pathlib import Path
+import time
+
+# Read the OpenAI API key from the environment instead of hard-coding it
+import os
+api_key = os.environ["OPENAI_API_KEY"]
+client = OpenAI(api_key=api_key)
+
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "You are a helpful assistant who explains and answers about IoT use cases in Vodafone."},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+
+def user_input():
+    while True:
+        try:
+            num = int(input("Enter a number from 1 to 3: "))
+            if 1 <= num <= 3:
+                return num
+            else:
+                print("Invalid input. Please enter a number between 1 and 3.")
+        except ValueError:
+            print("Invalid input. Please enter a valid integer.")
+
+def play_audio(num):
+    audio_files = {
+        1: "Gigabee.mp3",
+        2: "Hydrosense.mp3",  # matches the committed filename (case-sensitive on Linux)
+        3: "Pushtotalk.mp3",
+    }
+    
+    audio_file = audio_files.get(num)
+    
+    if audio_file:
+        pygame.mixer.init()
+        pygame.mixer.music.load(audio_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            text = recognizer.recognize_google(audio, language='en-US')
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+def get_response_from_openai(messages):
+    response = client.chat.completions.create(
+        model="gpt-3.5-turbo",
+        messages=messages,
+        max_tokens=150,
+        temperature=0.5,
+    )
+    return response.choices[0].message.content
+
+def read_text_file(file_path):
+    with open(file_path, 'r') as file:
+        return file.read()
+
+def generate_speech(text, file_path):
+    speech_file_path = Path(file_path).parent / "speech.mp3"
+    with client.audio.speech.with_streaming_response.create(
+        model="tts-1",  # Replace with your actual model ID
+        input=text,
+        voice="alloy"  # Replace with a valid voice for your chosen model
+    ) as response:
+        response.stream_to_file(str(speech_file_path))
+    return str(speech_file_path)
+
+def start_qa_mode(file_content):
+    while True:
+        print("Please ask your question:")
+        question = recognize_speech()
+        if question:
+            messages = create_messages(question, file_content)
+            answer = get_response_from_openai(messages)
+            print(f"Answer: {answer}")
+
+            speech_file_path = generate_speech(answer, "speech.mp3")
+            pygame.mixer.music.load(speech_file_path)
+            pygame.mixer.music.play()
+            while pygame.mixer.music.get_busy():
+                time.sleep(1)
+
+            print("Do you want to ask another question? (Yes/No)")
+            user_choice = recognize_speech()
+            if user_choice and user_choice.lower() == "no":
+                break
+        else:
+            print("Sorry, I didn't get that. Please ask again.")
+
+def main():
+    while True:
+        num = user_input()
+        play_audio(num)
+
+        text_files = {
+            1: "Gigabeeprotect.txt",
+            2: "Hydrosense.txt",
+            3: "Pushtotalk.txt",
+        }
+
+        text_file = text_files.get(num)
+        if text_file:
+            file_content = read_text_file(text_file)
+            start_qa_mode(file_content)
+
+if __name__ == "__main__":
+    main()

+ 127 - 0
Full2.py

@@ -0,0 +1,127 @@
+import pygame
+import speech_recognition as sr
+from openai import OpenAI
+from pathlib import Path
+import time
+import os
+
+# Initialize OpenAI API key
+# Read the API key from the environment rather than hardcoding a secret
+api_key = os.environ.get("OPENAI_API_KEY")
+client = OpenAI(api_key=api_key)
+
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "You are a tour guide who explains and answers about IoT use cases in Vodafone."},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+
+def user_input():
+    while True:
+        try:
+            num = int(input("Enter a number from 1 to 3: "))
+            if 1 <= num <= 3:
+                return num
+            else:
+                print("Invalid input. Please enter a number between 1 and 3.")
+        except ValueError:
+            print("Invalid input. Please enter a valid integer.")
+
+def play_audio(num):
+    audio_files = {
+        1: "speech1.mp3",
+        2: "Hydrosense.mp3",
+        3: "Pushtotalk.mp3",
+    }
+    
+    audio_file = audio_files.get(num)
+    
+    if audio_file:
+        pygame.mixer.init()
+        pygame.mixer.music.load(audio_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            text = recognizer.recognize_google(audio, language='en-US')
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+def get_response_from_openai(messages):
+    response = client.chat.completions.create(
+        model="gpt-4o",
+        messages=messages,
+        max_tokens=75,
+        temperature=0.5,
+    )
+    return response.choices[0].message.content
+
+def read_text_file(file_path):
+    with open(file_path, 'r') as file:
+        return file.read()
+
+def generate_speech(text, file_path):
+    speech_file_path = Path(file_path).parent / "speech.mp3"
+    with client.audio.speech.with_streaming_response.create(
+        model="tts-1",  # Replace with your actual model ID
+        input=text,
+        voice="alloy"  # Replace with a valid voice for your chosen model
+    ) as response:
+        response.stream_to_file(str(speech_file_path))
+    return str(speech_file_path)
+
+def start_qa_mode(file_content):
+    while True:
+        print("Please ask your question:")
+        question = recognize_speech()
+        if question and question.lower() in ["no", "go to next showcase", "exit"]:
+            break
+        if question:
+            messages = create_messages(question, file_content)
+            answer = get_response_from_openai(messages)
+            print(f"Answer: {answer}")
+            speech_file_path = generate_speech(answer, "speech.mp3")
+            pygame.mixer.init()
+            pygame.mixer.music.load(speech_file_path)
+            pygame.mixer.music.play()
+            while pygame.mixer.music.get_busy():
+                pygame.time.Clock().tick(10)  # Adjust as needed
+            pygame.mixer.music.stop()  # Ensure music playback stops
+            pygame.mixer.quit()  # Release resources
+            os.remove(speech_file_path)
+
+        else:
+            print("Sorry, I didn't get that. Please ask again.")
+
+def main():
+    while True:
+        num = user_input()
+        play_audio(num)
+
+        text_files = {
+            1: "Gigabeeprotect.txt",
+            2: "Hydrosense.txt",
+            3: "Pushtotalk.txt",
+        }
+
+        text_file = text_files.get(num)
+        if text_file:
+            file_content = read_text_file(text_file)
+            start_qa_mode(file_content)
+
+if __name__ == "__main__":
+    main()

File diff not shown due to its large size
+ 112 - 0
General.py


File diff not shown due to its large size
+ 165 - 0
Generalwakeup.py



+ 55 - 0
Gigabeeprotect.txt

@@ -0,0 +1,55 @@
+Overview of GigaBee Protect Solution
+The GigaBee Protect solution is a state-of-the-art IoT-based system designed to protect beehives from theft and environmental hazards. It is a compact, battery-operated device that provides beekeepers with real-time monitoring and alerts, ensuring the safety and security of their bee colonies.
+
+Background and Purpose
+Current methods of protecting beehives face several challenges:
+
+Theft and Vandalism: Beehives are susceptible to theft and vandalism, causing significant losses for beekeepers.
+Environmental Hazards: Beehives can be adversely affected by environmental conditions, including falls or displacement due to weather events.
+Limited Monitoring: Traditional methods offer limited monitoring capabilities, making it difficult for beekeepers to respond promptly to threats.
+The GigaBee Protect solution addresses these issues by providing a reliable, self-sufficient monitoring system that enhances beehive security and environmental resilience.
+
+Technical Details
+Hardware Components:
+Sensors: An accelerometer (LIS2DH12) for detecting movement and environmental changes.
+Microcontroller: A NORDIC nRF9160 microcontroller to manage device operations and connectivity.
+Modem: Utilizes LTE-M and NB-IoT for cellular positioning and communication.
+Battery: Designed to ensure the device remains operational for over 12 months on a single charge.
+Protective Casing: Ensures durability and protection against environmental factors.
+Software:
+Firmware: Developed using efficient programming to ensure low power consumption and robust performance.
+Cloud Platform: Allows remote monitoring and data visualization for beekeepers.
+Connectivity:
+Cellular Technology: Supports LTE-M and NB-IoT for reliable network connections, ensuring the device remains operational even in remote areas.
+Battery Monitoring: Constantly tracks battery levels to ensure continuous operation.
+Network Status Monitoring: Continuously checks network availability and quality to maintain stable connections.
+Solution Description
+The GigaBee Protect device uses advanced IoT technology to provide a stable and reliable monitoring system for beehives. Key features include:
+
+Movement Detection: Alerts beekeepers if the hive is moved or displaced.
+Environmental Monitoring: Provides data on environmental conditions affecting the hive.
+Battery Life: Long-lasting battery designed for over 12 months of operation without recharging.
+Real-time Alerts: Sends immediate notifications to beekeepers in case of theft, displacement, or environmental threats.
+Application Opportunities
+The GigaBee Protect solution is versatile and can be deployed in various settings:
+
+Apiaries: Protects commercial and private beehives from theft and environmental threats.
+Agricultural Areas: Ensures the safety of bee colonies in agricultural settings where pollination is critical.
+Remote Locations: Suitable for beehives located in remote or hard-to-reach areas where regular monitoring is challenging.
+Project Timeline
+06/21: Project inception and initial design phase.
+08/21: Installation of devices in 10 Vodafone beehives.
+08/21 - 11/21: Testing and showcasing at various events, including a press conference and Youtopia event in Cologne.
+Ongoing: Further development, testing, and integration based on feedback.
+Future Development
+The current status of the GigaBee Protect solution is in active use and ongoing development. Future plans include:
+
+Scaling Production: Aiming to increase production by 2023.
+Feature Enhancements: Adding capabilities such as a foulbrood detector, smoke detector, and a beekeeper app for detailed hive parameter visualization.
+Productization: Securing funding and completing patent processes within the next 2 to 4 years.
+Key Points of Contact
+For further information or to express interest in the GigaBee Protect solution, the main point of contact is:
+
+SPOC: Vodafone IoT Future Lab Team
+Conclusion
+The GigaBee Protect solution offers a robust and reliable monitoring system for beekeepers, ensuring the safety and security of their beehives. With continuous monitoring of movement and environmental conditions, the GigaBee Protect device provides peace of mind, significantly enhancing response times and protection against theft and hazards. The device is powered by a 4.2 V lithium-ion battery.


+ 55 - 0
Hydrosense.txt

@@ -0,0 +1,55 @@
+Overview of HydroSense
+HydroSense is an innovative Internet of Things (IoT) solution developed for the continuous and real-time monitoring of lake water quality. This system employs a buoy equipped with various sensors to measure multiple water parameters, offering a more efficient and timely assessment compared to traditional manual sampling methods.
+
+Background and Purpose
+Traditional methods of analyzing lake water quality involve manual sampling, typically performed only a few times per year. This sporadic testing results in a lack of real-time data, making it difficult to monitor ongoing pollution and environmental changes. The European guideline EG/2000/60 highlights the necessity for automated monitoring solutions, prompting the development of HydroSense to address this need.
+
+Technical Details
+Hardware Components:
+
+Sensors: The buoy is equipped with four sensors:
+One single sensor
+Three sensors integrated into a multiparameter sonde
+Power Supply: The system is powered by solar panels, ensuring a sustainable and continuous power source for the sensors and other components.
+Software and Connectivity:
+
+Dashboard: Data collected by the sensors is transmitted to the IoT Future Lab Dashboard, where it is visualized and analyzed.
+Connectivity: Uses Narrowband IoT (NB-IoT) for reliable and efficient data transmission from the buoy to the dashboard.
+Solution Description
+HydroSense monitors the following water parameters:
+
+Temperature
+pH Level
+Electrical Conductivity
+Redox Potential
+Dissolved Oxygen
+Turbidity
+The buoy’s design, including its case and swim body, is 3D printed to ensure cost-effective production and deployment. The collected data is sent over NB-IoT and displayed on a dashboard, allowing for real-time monitoring and analysis of lake water quality.
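The uplink step can be sketched as follows. This is an illustrative sketch only: the short field names and JSON encoding are assumptions, not the actual HydroSense protocol, chosen to show why compact payloads matter on NB-IoT.

```python
import json
import time

def build_payload(readings: dict) -> bytes:
    """Attach a timestamp and serialize one measurement cycle for uplink.
    NB-IoT favours small payloads, so keys are kept short (assumed schema)."""
    payload = {
        "ts": int(time.time()),             # Unix timestamp of the sample
        "t":  readings["temperature_c"],
        "ph": readings["ph"],
        "ec": readings["conductivity_us_cm"],
        "rx": readings["redox_mv"],
        "do": readings["dissolved_oxygen_mg_l"],
        "tu": readings["turbidity_ntu"],
    }
    return json.dumps(payload, separators=(",", ":")).encode()

sample = {
    "temperature_c": 18.4, "ph": 7.9, "conductivity_us_cm": 310,
    "redox_mv": 215, "dissolved_oxygen_mg_l": 9.1, "turbidity_ntu": 3.2,
}
print(len(build_payload(sample)), "bytes")  # compact, NB-IoT-friendly size
```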
+
+Application Opportunities
+The HydroSense solution can be deployed in any lake, making it highly versatile. Potential applications include:
+
+Municipal Use: Cities and communities can use HydroSense to monitor local water bodies.
+Environmental Restoration: Companies involved in environmental restoration projects can leverage this technology to assess and improve lake conditions.
+Project Timeline
+The development and implementation timeline for HydroSense includes:
+
+07 June 2022: Project initiation
+20 July 2022: Arrival of sensors
+05 August 2022: Completion of swim body and case
+05 September 2022: Project completion
+08 September 2022: Implementation with Landesanstalt für Umwelt Baden-Württemberg (LUBW) at Bodensee
+Showcase Features
+The HydroSense showcase is a scaled-down, interactive version designed for demonstrations:
+
+Measured Parameters: The showcase focuses on two key parameters: oxygen and temperature.
+Design: It features a 74-liter tank with synthetic plants, stones, and sand to simulate a natural lake environment. Sensors float in the water and send real-time data via 5G to a dashboard.
+Interactivity: Users can alter water conditions using two buttons:
+One button increases oxygen levels.
+Another button raises the water temperature.
+These changes are accompanied by bubbles and lights for an engaging and educational user experience.
+Future Development and Communication
+HydroSense's showcase is set for further testing and lab establishment by March 2023. Communication strategies include updates through Vodafone's external and internal channels, such as the newsroom and social media platforms, ensuring widespread awareness and engagement with the project.
+
+Conclusion
+HydroSense represents a significant advancement in automated environmental monitoring, providing real-time data and analysis to help manage and protect lake ecosystems. Its innovative design and technology make it a valuable tool for municipalities, environmental organizations, and restoration projects.

+ 53 - 0
IoTConfiguratorSolution.txt

@@ -0,0 +1,53 @@
+Overview of IoT Configurator Solution
+The IoT Configurator Solution is an advanced platform developed by Vodafone, designed to facilitate the creation and deployment of IoT prototypes. This platform leverages IoT hardware cubes, connectivity technologies, and cloud services to enable rapid prototyping and scalable IoT solutions.
+
+Background and Purpose
+Vodafone Innovation Park in Germany has developed the IoT Configurator to address the complexities of IoT ecosystems and the need for end-to-end (E2E) perspectives in launching successful IoT products. The solution allows for the co-creation of innovative prototypes and solutions to unlock the full potential of IoT. It aims to provide customers with a comprehensive understanding of IoT, from the initial idea to the final product, emphasizing individualization and scalability.
+
+Technical Details
+Hardware Components:
+
+Sensors: Various sensors to capture environmental and operational data.
+Microcontroller Module: For processing and managing sensor data.
+Case: Enclosures to protect and house IoT components.
+Energy Source: Battery or other power sources to ensure continuous operation.
+3D Printing: Used for creating custom components and enclosures.
+Software and Data Visualization:
+
+Firmware: Manages sensor data collection and communication.
+Dashboard: Visualizes collected data for real-time monitoring and analysis.
+Web-App Integration: Facilitates user interaction and control over IoT prototypes.
+Connectivity Technologies:
+
+Cellular Connectivity: 2G, 4G, 5G, NB-IoT, and LTE-M for robust data transmission.
+Cloud Server: For data storage, processing, and analytics.
+Solution Description
+The IoT Configurator Solution offers an extensive introduction to IoT, covering the history, technical details, and development process. It provides customers the opportunity to develop their own IoT prototypes using interactive hardware cubes. The solution includes:
+
+E2E IoT Rapid Prototyping: From ideation to deployment, ensuring scalable solutions.
+Real-time Data Visualization: Through a user-friendly dashboard.
+Hands-on Experience: With customizable IoT hardware cubes.
+Application Opportunities
+The IoT Configurator can be utilized in various scenarios, including:
+
+VIP Lab Tours: Demonstrating IoT capabilities to potential clients.
+Customer Contact: Engaging customers with interactive IoT solutions.
+Fairs and Exhibitions: Showcasing IoT innovations and prototypes.
+Project Timeline
+Initial Idea of First IoT Showcase: January 2021
+Pitch and Agency Selection: March 2021
+Hardware Alignment and Exhibit Delivery: February 2021
+Content Preparation and Programming: Throughout 2021 and early 2022
+Estimated Completion of Showcase: February 2023
+Future Development
+The IoT Configurator is an ongoing project with future goals including:
+
+Extended Use Cases: Adapting the solution for broader applications across different industries.
+Productization: Moving from a showcase to a market-ready product.
+Enhanced Connectivity: Exploring advanced connectivity options for better performance.
+Key Points of Contact
+For more information or to express interest in the IoT Configurator Solution, please contact:
+
+Laura Biermann: Vodafone SPOC
+Conclusion
+The IoT Configurator Solution by Vodafone provides a robust platform for developing and deploying IoT prototypes. By combining hardware, software, and connectivity technologies, it offers a comprehensive approach to IoT development, enhancing transparency, analytical capabilities, and operational efficiency. This solution holds potential for widespread application across various industries, driving innovation and improving revenue generation.

File diff not shown due to its large size
+ 194 - 0
Labtourmode.py


BIN
NetworkAnalyser.mp3


+ 37 - 0
Networkanalyzer.txt

@@ -0,0 +1,37 @@
+Overview of Network Analyzer (+)
+The Network Analyzer (+) is a device developed to test and analyze the availability and quality of Low Power Wide Area Networks (LPWANs) such as NB-IoT (Narrowband Internet of Things) and LTE-M. This tool is essential for partners and customers to validate network coverage and performance for various IoT projects.
+
+Background and Purpose
+LPWANs are known for their extensive cell coverage, which makes them suitable for areas with poor network quality. However, testing the availability and quality of LPWANs across multiple planned project locations can be costly and time-consuming. The Network Analyzer (+) addresses this issue by allowing partners and customers to perform these tests independently. This device provides real-time data on signal strength and quality, facilitating a more efficient assessment process.
+
+Technical Details
+The Network Analyzer (+) is equipped with the following technical features:
+
+Reference Signal Received Power (RSRP): Measures the power level received from a reference signal.
+Reference Signal Received Quality (RSRQ): Assesses the quality of the received reference signal.
+Timestamp: Records the time of each measurement.
+Solution Description
+The Network Analyzer (+) functions by transmitting data on signal strength and quality for LPWANs (NB-IoT Cat-M1) and GSM at regular intervals to a server. The collected data can be viewed and analyzed through a dashboard, where it can be filtered by time. Additionally, the device includes an integrated E-paper display that shows the current measured values with a quality indication through bars, providing an immediate visual representation of the network conditions.
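A minimal sketch of how RSRP might map to the display's quality bars, using common LTE rule-of-thumb breakpoints; the device's actual thresholds and record format are not documented here, so both are assumptions.

```python
def rsrp_bars(rsrp_dbm: float) -> int:
    """Translate RSRP (dBm) into a 0-4 bar quality indication.
    Breakpoints are common LTE rules of thumb, not device values."""
    if rsrp_dbm >= -80:
        return 4   # excellent
    if rsrp_dbm >= -90:
        return 3   # good
    if rsrp_dbm >= -100:
        return 2   # fair
    if rsrp_dbm >= -110:
        return 1   # poor
    return 0       # no usable signal

def measurement_record(rsrp_dbm, rsrq_db, timestamp):
    """One dashboard sample: the three fields listed above (RSRP, RSRQ,
    timestamp) plus the derived bar count for the E-paper display."""
    return {"rsrp": rsrp_dbm, "rsrq": rsrq_db, "ts": timestamp,
            "bars": rsrp_bars(rsrp_dbm)}

print(measurement_record(-95.0, -11.0, "2023-04-01T12:00:00Z"))
```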
+
+Application Opportunities
+The Network Analyzer (+) offers several practical applications:
+
+IoT Investment Validation and Assurance: Ensures that the network infrastructure can support IoT deployments, providing confidence in the investment.
+Building Trust in the Network: Demonstrates the reliability and quality of the network to potential users.
+Self-Testing Solution: Allows service providers to test network coverage and quality independently without requiring extensive external support.
+Project Timeline
+The production timeline for the Network Analyzer (+) is as follows:
+
+April 2023: 500 devices produced
+End of May 2023: Full rollout planned
+Status and Reusability
+Current Status: In production
+Location: Germany
+Type: Minimum Viable Product (MVP)
+Reusability: Yes, the device can be reused for multiple projects and locations.
+Key Points of Contact
+For more information or to express interest in the Network Analyzer (+), the main point of contact is:
+
+SPOC: Tim Schaerfke
+Conclusion
+The Network Analyzer (+) is a crucial tool for validating and ensuring the quality of LPWAN networks. It empowers partners and customers to conduct their own network assessments, thereby saving time and reducing costs. By providing real-time data and an easy-to-use interface, the Network Analyzer (+) enhances the ability to plan and deploy IoT solutions with confidence.


+ 52 - 0
Pushtotalk.txt

@@ -0,0 +1,52 @@
+Overview of Push To Talk (PTT) Solution
+The Push To Talk (PTT) solution is an emergency call system designed for office spaces. It is a wall-mounted, battery-operated unit that enables quick and reliable communication with emergency services such as the police, fire service, or maintenance department.
+
+Background and Purpose
+Current emergency calling devices in office environments face several limitations:
+
+They are dependent on power supply and Voice over Internet Protocol (VoIP) systems, which can fail during power outages or network issues.
+Companies often lack visibility into the operational status of these devices, which can result in employees being at high risk during emergencies if the devices are not functioning properly.
+The PTT solution addresses these shortcomings by offering a more reliable, self-sufficient emergency communication system.
+
+Technical Details
+Hardware Components:
+
+Push Buttons: Three buttons designated for different emergency services.
+Connectivity Board: A custom-developed board that facilitates communication.
+Modem: A Quectel modem for network connectivity.
+Battery: Ensures the device remains operational even during power outages.
+Microphone and Speaker: For clear audio communication.
+Microcontroller: Manages the device’s operations and connectivity.
+Software:
+
+Developed using React and C++ to ensure robust performance.
+Connectivity:
+
+Supports 4G and 2G Circuit Switched Fallback (CSFB) for reliable network connections.
+Solution Description
+The PTT device uses cellular technology to ensure a stable and reliable connection, even when traditional power and network systems fail. Key features include:
+
+Battery Monitoring: The device constantly monitors its battery level to ensure it is always ready for use.
+Network Status Monitoring: Continuously checks network availability and quality.
+Emergency Buttons: Three push buttons allow users to call specific emergency services directly.
+Modem Functionality: The modem actively searches for the best available connection to ensure successful communication.
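The dispatch logic implied by the features above can be sketched as a small decision function. The button mapping, battery threshold, and return strings are illustrative assumptions, not the PTT firmware.

```python
# Assumed mapping of the three push buttons to emergency lines.
EMERGENCY_LINES = {1: "police", 2: "fire service", 3: "maintenance"}

def handle_button(button_id, battery_pct, network_ok):
    """Decide what the unit should do when a button is pressed."""
    if button_id not in EMERGENCY_LINES:
        return "ignored: unknown button"
    if not network_ok:
        # The modem keeps searching; 2G CSFB is the fallback path.
        return "retrying: waiting for 4G/2G fallback"
    call = f"calling {EMERGENCY_LINES[button_id]}"
    if battery_pct < 20:  # assumed low-battery threshold
        call += " (low-battery warning raised)"
    return call

print(handle_button(1, 85, True))   # calling police
print(handle_button(2, 15, True))   # calling fire service (low-battery warning raised)
print(handle_button(3, 50, False))  # retrying: waiting for 4G/2G fallback
```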
+Application Opportunities
+The PTT solution is highly versatile and can be deployed in various settings:
+
+Office Buildings: Ensures employee safety on every floor by providing immediate access to emergency services.
+Public Places: Useful in locations that require constant maintenance and where quick communication with emergency services is crucial.
+Hazardous Environments: Suitable for areas where mobile phones are not permitted, but emergency communication is necessary.
+Project Timeline
+04/22: Start of the project, with the initial design of the audio circuit.
+05/22 - 08/22: Testing of the first Printed Circuit Board (PCB) for audio functionality and official review of the audio circuit from Quectel.
+09/22: Designing the second version of the audio circuit, integrating it, and completing the 3D design.
+Ongoing Tasks: Further testing and integration based on initial results and feedback.
+Future Development
+The current status of the PTT solution is a Proof of Concept (PoC), and efforts are underway to secure funding for productization. The estimated timeline for patent completion is 2 to 4 years.
+
+Key Points of Contact
+For further information or to express interest in the PTT solution, the main point of contact is:
+
+SPOC: TETI Tim Schaerfke
+Conclusion
+The Push To Talk solution provides a robust and reliable emergency communication system for office spaces and other environments where traditional methods may fail. With continuous monitoring of battery levels and network status, the PTT device ensures that help is always just a button press away, significantly enhancing safety and response times during emergencies.

+ 150 - 0
Qrcode.py

@@ -0,0 +1,150 @@
+import pygame
+import speech_recognition as sr
+import openai
+from pathlib import Path
+import time
+import os
+import cv2
+from pyzbar import pyzbar
+
+# Initialize OpenAI API key
+# Read the API key from the environment rather than hardcoding a secret
+api_key = os.environ.get("OPENAI_API_KEY")
+client = openai.OpenAI(api_key=api_key)
+
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "You are a helpful assistant who explains and answers about IoT use cases in Vodafone."},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+
+def play_audio(num):
+    audio_files = {
+        1: "Gigabee.mp3",
+        2: "Hydrosense.mp3",
+        3: "Pushtotalk.mp3",
+    }
+    
+    audio_file = audio_files.get(num)
+    
+    if audio_file:
+        pygame.mixer.init()
+        pygame.mixer.music.load(audio_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            text = recognizer.recognize_google(audio, language='en-US')
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+def get_response_from_openai(messages):
+    response = client.chat.completions.create(
+        model="gpt-3.5-turbo",
+        messages=messages,
+        max_tokens=150,
+        temperature=0.5,
+    )
+    return response.choices[0].message.content
+
+def read_text_file(file_path):
+    with open(file_path, 'r') as file:
+        return file.read()
+
+def generate_speech(text, file_path):
+    speech_file_path = Path(file_path).parent / "speech.mp3"
+    if speech_file_path.exists():
+        os.remove(speech_file_path)
+    with client.audio.speech.with_streaming_response.create(
+        model="tts-1",
+        input=text,
+        voice="alloy"
+    ) as response:
+        response.stream_to_file(str(speech_file_path))
+    return str(speech_file_path)
+
+def start_qa_mode(file_content):
+    while True:
+        print("Please ask your question:")
+        question = recognize_speech()
+        
+        if question and question.lower() in ["no", "next showcase", "exit"]:
+            break
+        
+        if question:
+            messages = create_messages(question, file_content)
+            answer = get_response_from_openai(messages)
+            print(f"Answer: {answer}")
+            
+            speech_file_path = generate_speech(answer, "speech.mp3")
+            pygame.mixer.init()
+            pygame.mixer.music.load(speech_file_path)
+            pygame.mixer.music.play()
+            while pygame.mixer.music.get_busy():
+                pygame.time.Clock().tick(10)  # Adjust as needed
+            pygame.mixer.music.stop()  # Ensure music playback stops
+            pygame.mixer.quit()  # Release resources
+            os.remove(speech_file_path)
+
+        else:
+            print("Sorry, I didn't get that. Please ask again.")
+
+def scan_qr_code():
+    cap = cv2.VideoCapture(0)
+    while True:
+        ret, frame = cap.read()
+        if not ret:
+            continue
+
+        decoded_objects = pyzbar.decode(frame)
+        for obj in decoded_objects:
+            qr_data = obj.data.decode('utf-8')
+            cap.release()
+            cv2.destroyAllWindows()
+            return qr_data
+
+        cv2.imshow('QR Code Scanner', frame)
+        if cv2.waitKey(1) & 0xFF == ord('q'):
+            break
+
+    cap.release()
+    cv2.destroyAllWindows()
+    return None
+
+def main():
+    text_files = {
+        "1": "Gigabeeprotect.txt",
+        "2": "Hydrosense.txt",
+        "3": "Pushtotalk.txt",
+    }
+
+    audio_files = {
+        "1": "Gigabee.mp3",
+        "2": "Hydrosense.mp3",
+        "3": "Pushtotalk.mp3",
+    }
+
+    while True:
+        print("Scan a QR code...")
+        qr_data = scan_qr_code()
+        if qr_data in text_files:
+            play_audio(int(qr_data))
+            file_content = read_text_file(text_files[qr_data])
+            start_qa_mode(file_content)
+        else:
+            print("Invalid QR code. Please try again.")
+
+if __name__ == "__main__":
+    main()


+ 50 - 0
RFIDautomationenabler.txt

@@ -0,0 +1,50 @@
+Overview of RFID Automation Enabler Solution
+The RFID Automation Enabler is an advanced IoT-based device designed to track samples, products, and commodities at production sites. It utilizes RFID technology combined with LTE-M connectivity to provide real-time tracking and data forwarding to a cloud instance, enhancing automation and inventory management for various industries.
+
+Background and Purpose
+Prezero, a German environmental services provider, faces challenges in tracking the ingredients of processed trash cubes for resale purposes. Traditional tracking methods are inefficient due to the attenuation of RFID signals through trash piles. The RFID Automation Enabler addresses these challenges by offering:
+
+Enhanced Tracking Accuracy: Mitigates signal attenuation issues.
+Real-time Data Transmission: Provides up-to-date tracking information.
+Improved Inventory Management: Helps streamline processes and optimize resources.
+Technical Details
+Hardware Components:
+RFID Reader: Thinkmagic M6E-NANO for reading RFID tags.
+Modem: BG95-M3 Modem (LTE-M) for reliable network connectivity.
+Battery: Lithium-ion battery ensures continuous operation.
+Extendable Rods: Allows the device to be positioned at optimal heights for accurate readings.
+Software:
+Firmware: Manages RFID reading and data transmission.
+Dashboard: Visualizes the collected data, showing the presence of Electronic Product Codes (EPCs).
+Connectivity
+LTE-M: Ensures robust data transmission to the cloud, even in challenging environments.
+Future Upgrade: Potential upgrade to RedCap devices for enhanced performance.
+Solution Description
+The RFID Automation Enabler reads all EPCs of nearby RFID tags and forwards this data through LTE-M to a backend system. A user-friendly dashboard visualizes the data, showing real-time status of tracked items. Key features include:
+
+Reliable RFID Reading: Overcomes signal attenuation issues in dense environments.
+Real-time Data Forwarding: Ensures immediate availability of tracking information.
+Ease of Deployment: Extendable rods facilitate easy positioning and testing on-site.
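A hedged sketch of the read-and-forward cycle described above: de-duplicate the EPCs seen in one scan window, then pack a batch for the LTE-M uplink. Function names and the batch format are assumptions for illustration; the real reader and uplink calls are hardware-specific.

```python
import time

def collect_epcs(raw_reads):
    """De-duplicate EPCs seen in one scan window, keeping first-seen order.
    A tag near the antenna is typically read many times per window."""
    seen = []
    for epc in raw_reads:
        if epc not in seen:
            seen.append(epc)
    return seen

def build_batch(epcs):
    """One backend message: timestamp plus the unique EPC list."""
    return {"ts": int(time.time()), "epcs": epcs, "count": len(epcs)}

# Repeated reads of the same tags collapse to one entry each.
reads = ["E200-01", "E200-02", "E200-01", "E200-03", "E200-02"]
batch = build_batch(collect_epcs(reads))
print(batch["count"])  # 3
```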
+Application Opportunities
+The RFID Automation Enabler can be deployed across various industries and environments:
+
+Production Sites: Tracks commodities and products within large manufacturing areas.
+Waste Management: Monitors the composition of processed trash cubes for resale.
+Logistics and Supply Chain: Enhances visibility and efficiency in inventory management.
+Project Timeline
+Project Start: Initial design and development phase.
+On-site Presentation: Demonstration planned to address signal attenuation concerns.
+Prototype Completion: Estimated by 1st February 2024.
+Future Development
+Currently a Proof of Concept (PoC), the RFID Automation Enabler is under development with plans for further testing and refinement. Future goals include:
+
+Improved RFID Range: Enhancing the device's ability to read tags through dense materials.
+Productization: Moving from PoC to a market-ready product.
+Extended Use Cases: Adapting the solution for broader applications across different industries.
+Key Points of Contact
+For further information or to express interest in the RFID Automation Enabler solution, the main points of contact are:
+
+Tim Schaerfke
+Laura Biermann
+Conclusion
+The RFID Automation Enabler provides a robust and efficient solution for tracking items in challenging environments. By combining RFID technology with LTE-M connectivity, it offers real-time tracking and data visualization, significantly improving inventory management and operational efficiency. This solution holds potential for widespread application across various industries, enhancing transparency, analytical capabilities, and revenue generation.


+ 71 - 0
button.py

@@ -0,0 +1,71 @@
+import RPi.GPIO as GPIO
+import time
+import subprocess
+import logging
+import pygame
+pygame.mixer.init()
+
+# Setup logging
+logging.basicConfig(filename='/home/pi/script_log.txt', level=logging.INFO, format='%(asctime)s - %(message)s')
+
+# Setup GPIO pins
+GPIO.setmode(GPIO.BCM)
+button1_pin = 17  # GPIO pin for Button 1
+button2_pin = 27  # GPIO pin for Button 2
+led_pin = 22      # GPIO pin for LED
+GPIO.setup(button1_pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
+GPIO.setup(button2_pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
+GPIO.setup(led_pin, GPIO.OUT)
+GPIO.output(led_pin, GPIO.LOW)  # Turn off LED initially
+
+# Global process variable
+current_process = None
+pygame.mixer.music.load('Welcome.mp3')
+pygame.mixer.music.play()
+
+def run_script(script_path):
+    global current_process
+    if current_process is not None:
+        logging.info("Terminating existing process...")
+        current_process.terminate()
+        time.sleep(2)  # Give it some time to terminate
+        if current_process.poll() is None:  # If it's still running
+            logging.info("Force killing process...")
+            current_process.kill()
+        current_process.wait()
+        current_process = None
+        GPIO.output(led_pin, GPIO.LOW)  # Turn off LED
+
+    logging.info(f"Starting {script_path}")
+    current_process = subprocess.Popen(['python3', script_path])
+    GPIO.output(led_pin, GPIO.HIGH)  # Turn on LED
+
+try:
+    while True:
+        button1_state = GPIO.input(button1_pin)
+        button2_state = GPIO.input(button2_pin)
+
+        if button1_state == GPIO.LOW:  # Button 1 pressed
+            run_script('/home/pi/Desktop/MasterFiles/Theproject/Labtourmode.py')
+            pygame.mixer.music.load('mode1.mp3')
+            pygame.mixer.music.play()
+            time.sleep(0.5)  # Debounce delay
+
+        if button2_state == GPIO.LOW:  # Button 2 pressed
+            run_script('/home/pi/Desktop/MasterFiles/Theproject/generalnew.py')
+            pygame.mixer.music.load('mode2.mp3')
+            pygame.mixer.music.play()
+            time.sleep(0.5)  # Debounce delay
+
+        time.sleep(0.1)  # Small delay to debounce the buttons
+
+finally:
+    # Cleanup GPIO pins on exit
+    if current_process is not None:
+        current_process.terminate()
+    GPIO.cleanup()
+    logging.info("Button manager script terminated.")

+ 109 - 0
checking.py

@@ -0,0 +1,109 @@
+import pygame
+import speech_recognition as sr
+from pathlib import Path
+from openai import OpenAI
+import os
+
+# Set your OpenAI API key here
+api_key = 'sk-proj-wwWaxim1Qt13243454353uqzSS0xjT3BlbkFJK0rZvx78AJiWG3Ot7d3S'
+client = OpenAI(api_key=api_key)
+
+# Initialize pygame mixer
+pygame.mixer.init()
+def read_text_file(file_path):
+    with open(file_path, 'r', encoding='utf-8') as file:
+        return file.read()
+
+# Function to recognize speech from the microphone
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            text = recognizer.recognize_google(audio, language='en-US')
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+# Function to get a response from OpenAI
+
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "You are a helpful assistant who explains and answers about IoT use cases in Vodafone. Do not say any calculations. Directly say the result"},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+    
+def get_response_from_openai(messages):
+    response = client.chat.completions.create(
+        model="gpt-3.5-turbo",
+        messages=messages,
+        max_tokens=150,
+        temperature=0.5,
+    )
+    return response.choices[0].message.content
+
+# Function to generate speech using OpenAI TTS
+def generate_speech(text, file_path):
+    speech_file_path = Path(file_path).parent / "speech.mp3"
+    with client.audio.speech.with_streaming_response.create(
+        model="tts-1",  # Replace with your actual model ID
+        input=text,
+        voice="alloy"  # Replace with a valid voice for your chosen model
+    ) as response:
+        response.stream_to_file(str(speech_file_path))
+    return str(speech_file_path)
+
+# Main function to handle user query
+def chatbot(question, file_path):
+    file_content = read_text_file(file_path)
+    messages = create_messages(question, file_content)
+    answer = get_response_from_openai(messages)
+    return answer
+
+if __name__ == "__main__":
+    file_path = 'device_data.txt'  # Path to your text file
+    while True:
+        print("Press Enter to ask a question, or type 'exit' or 'quit' to stop.")
+        user_input = input("Type 'speak' to ask a question using your voice: ").strip().lower()
+        
+        if user_input in ['exit', 'quit']:
+            break
+        
+        if user_input == 'speak':
+            question = recognize_speech()
+            if question:
+                answer = chatbot(question, file_path)
+                print("Answer:", answer)
+                speech_file = generate_speech(answer, file_path)
+                pygame.mixer.init()
+                pygame.mixer.music.load(speech_file)
+                pygame.mixer.music.play()
+                while pygame.mixer.music.get_busy():
+                    pygame.time.Clock().tick(10)  # Adjust as needed
+                pygame.mixer.music.stop()  # Ensure music playback stops
+                pygame.mixer.quit()  # Release resources
+                os.remove(speech_file)
+
+        else:
+            question = user_input
+            answer = chatbot(question, file_path)
+            print("Answer:", answer)
+            speech_file = generate_speech(answer, file_path)
+            pygame.mixer.init()
+            pygame.mixer.music.load(speech_file)
+            pygame.mixer.music.play()
+            while pygame.mixer.music.get_busy():
+                pygame.time.Clock().tick(10)  # Adjust as needed
+            pygame.mixer.music.stop()  # Ensure music playback stops
+            pygame.mixer.quit()  # Release resources
+            os.remove(speech_file)

File diff suppressed because it is too large
+ 40 - 0
device_data.csv


+ 952 - 0
device_data.txt

@@ -0,0 +1,952 @@
+The battery's maximum voltage is 4.2 V, and the minimum is 3 V.
+
+ID: 373537
+IMEI: 352656101015025
+Timestamp: 2023-07-20T23:23:37+00:00
+Project: alwa
+Location: None
+Latest: True
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3088
+
+ID: 373536
+IMEI: 352656101015025
+Timestamp: 2023-07-20T23:18:22+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3083
+
+ID: 373535
+IMEI: 352656101015025
+Timestamp: 2023-07-20T23:13:08+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3088
+
+ID: 373534
+IMEI: 352656101015025
+Timestamp: 2023-07-20T23:07:52+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3083
+
+ID: 373533
+IMEI: 352656101015025
+Timestamp: 2023-07-20T23:02:37+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3088
+
+ID: 373532
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:57:22+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3088
+
+ID: 373531
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:52:07+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3093
+
+ID: 373530
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:46:52+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3098
+
+ID: 373529
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:41:37+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3098
+
+ID: 373527
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:36:21+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3098
+
+ID: 373526
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:31:06+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3088
+
+ID: 373525
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:25:51+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373524
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:20:36+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3093
+
+ID: 373523
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:15:21+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3093
+
+ID: 373522
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:10:07+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3098
+
+ID: 373521
+IMEI: 352656101015025
+Timestamp: 2023-07-20T22:04:51+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373520
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:59:36+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3098
+
+ID: 373519
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:54:21+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3093
+
+ID: 373518
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:49:06+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373517
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:43:51+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3108
+
+ID: 373516
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:38:36+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373515
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:33:21+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3108
+
+ID: 373514
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:28:05+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373513
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:22:51+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3108
+
+ID: 373512
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:17:35+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373510
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:12:20+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373509
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:07:05+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373508
+IMEI: 352656101015025
+Timestamp: 2023-07-20T21:01:50+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373507
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:56:37+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373506
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:51:20+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3103
+
+ID: 373505
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:46:05+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3108
+
+ID: 373504
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:40:50+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3108
+
+ID: 373503
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:35:35+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373502
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:30:20+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3119
+
+ID: 373501
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:25:05+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373500
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:19:49+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3108
+
+ID: 373499
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:14:34+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373498
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:09:20+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3124
+
+ID: 373497
+IMEI: 352656101015025
+Timestamp: 2023-07-20T20:04:04+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3119
+
+ID: 373496
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:58:49+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3114
+
+ID: 373495
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:53:34+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3124
+
+ID: 373494
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:48:19+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3124
+
+ID: 373493
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:43:04+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3119
+
+ID: 373492
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:37:49+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3124
+
+ID: 373491
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:32:34+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3129
+
+ID: 373490
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:27:19+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3124
+
+ID: 373489
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:22:04+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3124
+
+ID: 373488
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:16:48+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3129
+
+ID: 373486
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:11:33+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3129
+
+ID: 373485
+IMEI: 352656101015025
+Timestamp: 2023-07-20T19:06:18+00:00
+Project: alwa
+Location: None
+Latest: False
+Network:
+  TA: 0
+  CGI: 11730433
+  LAC: 44112
+  MCC: 262
+  MNC: 2
+  RSRP: 0
+  RSRQ: 0
+  EARFCN: 0
+Data:
+  Usage: None
+  Battery Voltage: 3134
+

File diff suppressed because it is too large
+ 136 - 0
generalnew.py





+ 127 - 0
hm.py

@@ -0,0 +1,127 @@
+import pygame
+import speech_recognition as sr
+from openai import OpenAI
+import time
+import io
+import soundfile as sf
+import sounddevice as sd
+
+# Initialize OpenAI API key
+api_key = 'sk-proj-wwWaxim1Qt13uqzSS0xjT3BlbkFJK0rZvx78AJiWG3Ot7d3S'
+client = OpenAI(api_key=api_key)
+
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "You are a helpful assistant who explains and answers about IoT use cases in Vodafone."},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+
+def play_audio(num):
+    audio_files = {
+        1: "speech1.mp3",
+        2: "Hydrosense.mp3",
+        3: "Pushtotalk.mp3",
+    }
+    
+    audio_file = audio_files.get(num)
+    
+    if audio_file:
+        pygame.mixer.init()
+        pygame.mixer.music.load(audio_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            text = recognizer.recognize_google(audio, language='en-US')
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+def get_response_from_openai(messages):
+    response = client.chat.completions.create(
+        model="gpt-3.5-turbo",
+        messages=messages,
+        max_tokens=150,
+        temperature=0.5,
+    )
+    return response.choices[0].message.content
+
+def read_text_file(file_path):
+    with open(file_path, 'r') as file:
+        return file.read()
+
+def generate_speech(text):
+    spoken_response = client.audio.speech.create(
+        model="tts-1",
+        voice="alloy",
+        response_format="opus",
+        input=text
+    )
+
+    buffer = io.BytesIO()
+    for chunk in spoken_response.iter_bytes(chunk_size=4096):
+        buffer.write(chunk)
+    buffer.seek(0)
+
+    with sf.SoundFile(buffer, 'r') as sound_file:
+        data = sound_file.read(dtype='int16')
+        sd.play(data, sound_file.samplerate)
+        sd.wait()
+
+def start_qa_mode(file_content):
+    while True:
+        print("Please ask your question:")
+        question = recognize_speech()
+
+        if question and question.lower() in ["no", "go to next showcase", "exit"]:
+            break
+
+        if question:
+            messages = create_messages(question, file_content)
+            answer = get_response_from_openai(messages)
+            print(f"Answer: {answer}")
+
+            generate_speech(answer)
+        else:
+            print("Sorry, I didn't get that. Please ask again.")
+
+def user_input():
+    while True:
+        try:
+            num = int(input("Enter a number from 1 to 3: "))
+            if 1 <= num <= 3:
+                return num
+            else:
+                print("Invalid input. Please enter a number between 1 and 3.")
+        except ValueError:
+            print("Invalid input. Please enter a valid integer.")
+
+def main():
+    text_files = {
+        1: "Gigabeeprotect.txt",
+        2: "Hydrosense.txt",
+        3: "Pushtotalk.txt",
+    }
+
+    while True:
+        num = user_input()
+        play_audio(num)
+        file_content = read_text_file(text_files[num])
+        start_qa_mode(file_content)
+
+if __name__ == "__main__":
+    main()



File diff suppressed because it is too large
+ 172 - 0
latestfetch.py


+ 28 - 0
ml.py

@@ -0,0 +1,28 @@
+# train_model.py
+import pandas as pd
+from sklearn.model_selection import train_test_split
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.metrics import accuracy_score
+import joblib
+
+# Load the collected data from CSV
+data = pd.read_csv('wifi_signals.csv')
+
+# Features (SignalStrength) and target (Location)
+X = data[['SignalStrength']]
+y = data['Location']
+
+# Split the data into training and test sets (80% train, 20% test)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+# Train a Random Forest Classifier
+model = RandomForestClassifier()
+model.fit(X_train, y_train)
+
+# Evaluate the model
+y_pred = model.predict(X_test)
+accuracy = accuracy_score(y_test, y_pred)
+print(f'Model Accuracy: {accuracy * 100:.2f}%')
+
+# Save the trained model to a file
+joblib.dump(model, 'wifi_model.pkl')
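The model saved above can later be reloaded for inference (which is presumably what a prediction script in this commit does). A minimal sketch, using made-up signal values and a hypothetical `wifi_model_demo.pkl` filename rather than the real `wifi_signals.csv` data:

```python
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Toy training data standing in for wifi_signals.csv (values are made up)
data = pd.DataFrame({
    'SignalStrength': [-40, -42, -75, -78],
    'Location': ['deskA', 'deskA', 'deskB', 'deskB'],
})

model = RandomForestClassifier(random_state=42)
model.fit(data[['SignalStrength']], data['Location'])
joblib.dump(model, 'wifi_model_demo.pkl')

# Later, e.g. in a prediction script: reload the model and classify a reading
loaded = joblib.load('wifi_model_demo.pkl')
reading = pd.DataFrame({'SignalStrength': [-41]})
print(loaded.predict(reading)[0])
```

Passing the new reading as a one-column DataFrame keeps the feature name consistent with training, which avoids scikit-learn's feature-name warning.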



File diff suppressed because it is too large
+ 126 - 0
nao.py


+ 187 - 0
newcache.py

@@ -0,0 +1,187 @@
+import pygame
+import speech_recognition as sr
+from openai import OpenAI
+from pathlib import Path
+import os
+import io
+import soundfile as sf
+import sounddevice as sd
+import random
+import csv
+
+# Initialize Pygame mixer for audio playback
+pygame.mixer.init()
+
+# Set your OpenAI API key here
+api_key = 'sk-proj-wwWaxim1Qt13uqzSS6660xjT3BlbkFJK0rZvx78AJiWG3Ot7d3S'  # Replace with your actual OpenAI API key
+client = OpenAI(api_key=api_key)
+
+# Cache for content and previous responses
+content_cache = {}
+response_cache = {}
+
+# Function to read text file
+def read_text_file(file_path):
+    try:
+        with open(file_path, 'r', encoding='utf-8') as file:
+            return file.read()
+    except Exception as e:
+        print(f"Error reading text file: {e}")
+        return ""
+
+# Function to read CSV file
+def read_csv_file(file_path):
+    content = ""
+    try:
+        with open(file_path, mode='r', encoding='utf-8') as file:
+            reader = csv.reader(file)
+            for row in reader:
+                content += ' '.join(row) + ' '
+    except Exception as e:
+        print(f"Error reading CSV file: {e}")
+    return content
+
+# Function to recognize speech from the microphone
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            text = recognizer.recognize_google(audio, language='en-US')
+            # Play a random audio response
+            random_audio = random.choice(["ty.mp3", "th.mp3", "sure.mp3", "sure1.mp3"])
+            pygame.mixer.music.load(random_audio)
+            pygame.mixer.music.play()
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+# Function to create messages for OpenAI API
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "Your name is Futurebot..."},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+
+# Function to get a response from OpenAI
+def get_response_from_openai(messages):
+    stream = client.chat.completions.create(
+        model="gpt-3.5-turbo",
+        max_tokens=150,
+        temperature=0.5,
+        messages=messages,
+        stream=True,
+    )
+    for chunk in stream:
+        yield chunk
+
+# Function to generate and play speech in chunks
+def generate_speech(text):
+    if text.strip():
+        spoken_response = client.audio.speech.create(
+            model="tts-1",
+            voice="alloy",
+            input=text
+        )
+
+        buffer = io.BytesIO()
+        for chunk in spoken_response.iter_bytes(chunk_size=4096):
+            buffer.write(chunk)
+        buffer.seek(0)
+
+        with sf.SoundFile(buffer, 'r') as sound_file:
+            data = sound_file.read(dtype='int16')
+            sd.play(data, sound_file.samplerate)
+            sd.wait()
+
+# Function to load files once at startup
+def load_files_once(text_file_path, csv_file_path):
+    text_content = read_text_file(text_file_path)
+    csv_content = read_csv_file(csv_file_path)
+    return text_content + ' ' + csv_content
+
+# Function to reload files if they have been modified since the last load
+def load_files_if_updated(text_file_path, csv_file_path, last_mod_time):
+    text_mod_time = os.path.getmtime(text_file_path)
+    csv_mod_time = os.path.getmtime(csv_file_path)
+    if text_mod_time > last_mod_time['text'] or csv_mod_time > last_mod_time['csv']:
+        print("Files updated, reloading...")
+        last_mod_time['text'] = text_mod_time
+        last_mod_time['csv'] = csv_mod_time
+        return read_text_file(text_file_path) + ' ' + read_csv_file(csv_file_path)
+    return None  # No update
+
+# Function to cache and retrieve responses
+def get_cached_response(question, combined_content):
+    # Generate a unique key based on question and content
+    cache_key = (question, combined_content)
+
+    # Check if the response is already cached
+    if cache_key in response_cache:
+        print("Using cached response")
+        cached = response_cache[cache_key]
+        print(cached, end="", flush=True)
+        generate_speech(cached)
+        return cached
+
+    # Otherwise, generate a new response
+    messages = create_messages(question, combined_content)
+    response_generator = get_response_from_openai(messages)
+
+    accumulated_response = ""  # Text buffered until it is spoken
+    full_response = ""         # Complete answer, kept for the cache
+
+    # Process the response chunk by chunk
+    for response_chunk in response_generator:
+        chunk_content = response_chunk.choices[0].delta.content if response_chunk.choices else None
+
+        # Check if chunk_content is not None
+        if chunk_content:
+            accumulated_response += chunk_content
+            full_response += chunk_content
+
+            # Speak at a sentence end or once the buffer grows long
+            if '.' in chunk_content or len(accumulated_response) > 600:
+                print(accumulated_response, end="", flush=True)
+                generate_speech(accumulated_response)
+                accumulated_response = ""  # Reset buffer for the next chunk
+
+    if accumulated_response:  # Generate speech for any remaining text
+        print(accumulated_response, end="", flush=True)
+        generate_speech(accumulated_response)
+
+    # Cache the complete response (not just the final chunk) for future use
+    response_cache[cache_key] = full_response
+    return full_response
+
+# Main function to handle user query
+def chatbot(question, combined_content):
+    # get_cached_response prints and speaks the answer itself,
+    # so no second generate_speech call is needed here
+    get_cached_response(question, combined_content)
+
+if __name__ == "__main__":
+    text_file_path = 'Allinone.txt'  # Path to your text file
+    csv_file_path = 'device_data.csv'  # Path to your CSV file
+
+    # Load the files once at startup
+    last_mod_time = {'text': 0, 'csv': 0}
+    combined_content = load_files_once(text_file_path, csv_file_path)
+    last_mod_time['text'] = os.path.getmtime(text_file_path)
+    last_mod_time['csv'] = os.path.getmtime(csv_file_path)
+
+    # Main loop
+    while True:
+        question = recognize_speech()
+        if question:
+            # Check if files have been updated, and reload if necessary
+            updated_content = load_files_if_updated(text_file_path, csv_file_path, last_mod_time)
+            if updated_content:
+                combined_content = updated_content
+
+            # Call the chatbot with the current content
+            chatbot(question, combined_content)
+        else:
+            print("Sorry, I didn't get that. Please ask again.")
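The main loop above reloads the text and CSV files only when their modification time changes. A minimal, self-contained sketch of that pattern (hypothetical helper name; the real `load_files_once`/`load_files_if_updated` are defined earlier in the file):

```python
import os

def reload_if_changed(path, last_mtime):
    """Return (content, mtime) if the file changed since last_mtime,
    else (None, last_mtime) so the caller keeps its cached copy."""
    mtime = os.path.getmtime(path)
    if mtime > last_mtime:
        with open(path) as f:
            return f.read(), mtime
    return None, last_mtime
```

The same check-before-read idea generalizes to any number of source files by keeping one stored mtime per path, as the `last_mod_time` dict does above.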

+ 116 - 0
newfetch.py

@@ -0,0 +1,116 @@
+from msal import ConfidentialClientApplication
+import requests
+import csv
+from typing import List, Dict, Any, Union
+
+# Define your Azure AD and API configurations
+TENANT_ID = '82360567-02b6-4a45451f-a4z6-910a811b8131'
+CLIENT_ID = 'a3d88cc7-8889-42356548-bd3b-f5689372f07084'
+CLIENT_SECRET = 'ibh8Q~LVSzZeeewdeWF546456UHNSoUBP.lz8GBmenXGJTlnbdU'
+SCOPE = ['api://a3d88cc7-0789-4238-bd3hb-456456f89667372f07084/.default']
+
+# Define API endpoint configurations
+data_endpoint_config = {
+    'projects': {
+        'fields': 'projects?select=id,name,route,imeis,subline,ws10Icon,routeSecondLayer',
+        'order': '&order=name.asc&limit=10',
+    },
+    'latestDeviceData': {
+        'fields': 'devices',
+        'order': '&order=name.desc',
+    },
+    'data': {
+        'fields': 'data?select=id,version,imei,timestamp,network,data,project,location,latest',
+        'order': '&order=timestamp.desc&limit=25',
+    },
+}
+
+# Function to retrieve access token using MSAL
+def get_access_token(tenant_id: str, client_id: str, client_secret: str, scope: List[str]) -> Union[str, None]:
+    authority = f"https://login.microsoftonline.com/{tenant_id}"
+    app = ConfidentialClientApplication(
+        client_id,
+        authority=authority,
+        client_credential=client_secret,
+    )
+    result = app.acquire_token_for_client(scopes=scope)
+    if 'access_token' in result:
+        return result['access_token']
+    else:
+        print('Failed to retrieve access token:', result.get('error_description', 'Unknown error'))
+        return None
+
+# Function to call API with access token
+def call_api_data(access_token: str, query: str) -> Union[Dict[str, Any], List[Dict[str, Any]]]:
+    headers = {
+        'Authorization': f'Bearer {access_token}',
+        'Content-Type': 'application/json',
+    }
+    url = f"https://api.vodafone.dev/iot-db/{query}"
+
+    print(f"Requesting URL: {url}")  # Debug: Print URL
+    print(f"Request Headers: {headers}")  # Debug: Print headers
+
+    try:
+        response = requests.get(url, headers=headers)
+        response.raise_for_status()  # Raise an error for bad status codes
+
+        return response.json()
+    except requests.exceptions.HTTPError as http_err:
+        print(f"HTTP error occurred: {http_err}")
+        print("Response content:", response.text)  # Print full response content
+    except Exception as err:
+        print(f"Other error occurred: {err}")
+
+    return {}  # Return empty dictionary if request fails
+
+# Function to get device data
+def get_device_data(query_name: str) -> Union[List[Dict[str, Any]], Dict[str, Any]]:
+    try:
+        access_token = get_access_token(TENANT_ID, CLIENT_ID, CLIENT_SECRET, SCOPE)
+        if access_token:
+            query_config = data_endpoint_config.get(query_name, {})
+            query_fields = query_config.get('fields', '')
+            query_order = query_config.get('order', '')
+            query = f"{query_fields}{query_order}"  # Combine fields and order
+            if query:
+                return call_api_data(access_token, query)
+            else:
+                print(f'No query found for query name: {query_name}')
+        else:
+            print('Failed to retrieve access token.')
+    except Exception as e:
+        print(f'Error retrieving data: {str(e)}')
+
+    return []
+
+# Function to convert data to CSV
+def write_data_to_csv(data: List[Dict[str, Any]], filename: str):
+    if not data:
+        print("No data to write to CSV.")
+        return
+
+    # Extract header from the first data item keys
+    headers = data[0].keys()
+
+    try:
+        with open(filename, 'w', newline='', encoding='utf-8') as csvfile:
+            writer = csv.DictWriter(csvfile, fieldnames=headers)
+            writer.writeheader()
+            writer.writerows(data)
+        print(f"Data successfully written to {filename}")
+    except Exception as e:
+        print(f"Error writing to CSV: {e}")
+
+# Fetch latest device data and write to CSV
+if __name__ == '__main__':
+    query_name = 'data'  # Example query name (replace with actual query name)
+    device_data = get_device_data(query_name)
+    
+    # Check if device_data is in the expected format
+    if isinstance(device_data, dict):
+        # If data is returned as a dictionary, try to get the list of items
+        device_data = device_data.get('items', [])
+
+    # Convert to CSV
+    write_data_to_csv(device_data, 'device_data.csv')
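`write_data_to_csv` takes its header from the first record's keys, so nested objects in the API response (e.g. `location` or `network`) would land in the CSV as raw Python dict strings. A small illustrative helper (not part of the script above) that flattens such records into dotted columns first:

```python
def flatten(record, parent="", sep="."):
    """Flatten nested dicts into dotted column names so csv.DictWriter
    receives only scalar values, e.g. {"location": {"lat": 1}} -> {"location.lat": 1}."""
    flat = {}
    for key, value in record.items():
        name = f"{parent}{sep}{key}" if parent else key
        if isinstance(value, dict):
            flat.update(flatten(value, name, sep))
        else:
            flat[name] = value
    return flat
```

Applied as `write_data_to_csv([flatten(r) for r in device_data], 'device_data.csv')`, assuming all records share the same nesting shape.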


+ 153 - 0
nokam.py

@@ -0,0 +1,153 @@
+import pygame
+import speech_recognition as sr
+from openai import OpenAI
+from pathlib import Path
+import time
+import os
+import io
+import soundfile as sf
+import sounddevice as sd
+import random
+
+# Initialize OpenAI API key
+api_key = 'sk-proj-435frgerrwwWaxim1Qt13uqzSS0xjT3BlbkFJK0rZvx78AJiWG3Ot7d3S'
+client = OpenAI(api_key=api_key)
+
+def create_messages(question, file_content):
+    return [
+        {"role": "system", "content": "Your name is Futurebot. You were created by Sooraj and his team, who develop innovative IoT projects in the IoT Future Lab at Vodafone. You work with Tim, Sooraj and Priya, along with other team members Laura, Sven, Thomas and Stephie. You are from the T-E-T-I team. Your manager is Teja. You are a lab tour guide who explains and answers questions about IoT use cases at Vodafone. Answer in a few short sentences. Do not open with greetings."},
+        {"role": "user", "content": f"{file_content}\n\nQ: {question}\nA:"}
+    ]
+
+def play_audio(num):
+    audio_files = {
+        1: "speech1.mp3",
+        2: "Hydrosense.mp3",  # file names are case-sensitive on Linux
+        3: "Pushtotalk.mp3",
+    }
+    
+    audio_file = audio_files.get(num)
+    
+    if audio_file:
+        pygame.mixer.init()
+        pygame.mixer.music.load(audio_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+
+def recognize_speech():
+    recognizer = sr.Recognizer()
+    with sr.Microphone() as source:
+        print("Listening...")
+        
+        audio = recognizer.listen(source)
+        try:
+            print("Recognizing...")
+            
+            
+            text = recognizer.recognize_google(audio, language='en-US')
+            
+            audio_files = ["ty.mp3", "th.mp3", "good.mp3", "hmm.mp3", "haha.mp3"]
+            # Select a random audio file
+            random_audio = random.choice(audio_files)
+            
+            # Load and play the selected random audio
+            pygame.mixer.music.load(random_audio)
+            pygame.mixer.music.play()
+            
+            print(f"You said: {text}")
+            return text
+        except sr.UnknownValueError:
+            print("Sorry, I did not understand that.")
+            return None
+        except sr.RequestError:
+            print("Sorry, there was an error with the speech recognition service.")
+            return None
+
+def get_response_from_openai(messages):
+    stream = client.chat.completions.create(
+        model="gpt-4o",
+        max_tokens=150,
+        temperature=0.5,
+        messages=messages,
+        stream=True,
+    )
+    for chunk in stream:
+        if chunk.choices[0].delta.content is not None:
+            yield chunk.choices[0].delta.content
+
+def read_text_file(file_path):
+    with open(file_path, 'r') as file:
+        return file.read()
+
+def generate_speech(text):
+    if text.strip():  # Only generate speech if the text is not empty
+        spoken_response = client.audio.speech.create(
+            model="tts-1",
+            voice="alloy",
+            input=text
+        )
+
+        buffer = io.BytesIO()
+        for chunk in spoken_response.iter_bytes(chunk_size=4096):
+            buffer.write(chunk)
+        buffer.seek(0)
+
+        with sf.SoundFile(buffer, 'r') as sound_file:
+            data = sound_file.read(dtype='int16')
+            sd.play(data, sound_file.samplerate)
+            sd.wait()
+
+def start_qa_mode(file_content):
+    while True:
+        
+        question = recognize_speech()
+
+        if question and question.lower() in ["no", "go to the next showcase", "exit", "i don't have any questions", "i have no questions", "i don't have any other questions", "that's it"]:
+            pygame.mixer.music.load("give.mp3")
+            pygame.mixer.music.play()
+            break
+
+        if question:
+            messages = create_messages(question, file_content)
+            response_generator = get_response_from_openai(messages)
+            print("Answer: ", end="")
+            accumulated_response = ""
+            for response_chunk in response_generator:
+                accumulated_response += response_chunk
+                if '.' in response_chunk or len(accumulated_response) > 500:  # Check for sentence end or length
+                    print(accumulated_response, end="", flush=True)
+                    generate_speech(accumulated_response)
+                    accumulated_response = ""  # Reset accumulated response for the next chunk
+            if accumulated_response:  # Generate speech for any remaining text
+                print(accumulated_response, end="", flush=True)
+                generate_speech(accumulated_response)
+        else:
+            print("Sorry, I didn't get that. Please ask again.")
+
+def user_input():
+    while True:
+        try:
+            num = int(input("Enter a number from 1 to 3: "))
+            if 1 <= num <= 3:
+                return num
+            else:
+                print("Invalid input. Please enter a number between 1 and 3.")
+        except ValueError:
+            print("Invalid input. Please enter a valid integer.")
+
+def main():
+    text_files = {
+        1: "Gigabeeprotect.txt",
+        2: "Hydrosense.txt",
+        3: "Pushtotalk.txt",
+    }
+
+    while True:
+        num = user_input()
+        play_audio(num)
+        file_content = read_text_file(text_files[num])
+        start_qa_mode(file_content)
+
+if __name__ == "__main__":
+    main()
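`start_qa_mode` flushes the TTS buffer whenever a chunk merely contains a period, which can cut a sentence mid-stream (decimals, abbreviations, or a period arriving mid-word in a chunk). One alternative, sketched under the assumption that whole sentences sound better when spoken, is to split the buffer only at the last clear sentence boundary:

```python
def drain_sentences(buffer):
    """Split buffered streamed text at the last '. ' boundary.

    Returns (ready_text, remainder): ready_text holds complete sentences to
    hand to TTS now, remainder keeps accumulating with the next chunks.
    """
    cut = buffer.rfind(". ")
    if cut == -1:
        return "", buffer
    return buffer[: cut + 1], buffer[cut + 2 :]
```

Inside the streaming loop this replaces the `'.' in response_chunk` test: call `ready, accumulated_response = drain_sentences(accumulated_response)` and speak `ready` when it is non-empty.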


+ 35 - 0
prediction.py

@@ -0,0 +1,35 @@
+# predict_location.py
+import subprocess
+import joblib
+import pandas as pd
+
+# Function to scan Wi-Fi networks (same as in wifi_scan.py)
+def get_wifi_signal():
+    scan_output = subprocess.check_output(['sudo', 'iwlist', 'wlan0', 'scan']).decode('utf-8')
+    return scan_output
+
+# Function to parse signal strength from scan output
+def parse_signal_strength(scan_output):
+    signal_strengths = []
+    for line in scan_output.split('\n'):
+        if "Signal level=" in line:
+            strength = line.split("Signal level=")[1].split(" ")[0]
+            signal_strengths.append(int(strength))
+    return signal_strengths
+
+# Load the pre-trained model
+model = joblib.load('wifi_model.pkl')
+
+# Predict the location based on real-time signal strengths
+def predict_location():
+    scan_output = get_wifi_signal()
+    signal_strengths = parse_signal_strength(scan_output)
+
+    # Prepare the input as a DataFrame with the correct feature name
+    for strength in signal_strengths:
+        data = pd.DataFrame([[strength]], columns=['SignalStrength'])  # Use the feature name 'SignalStrength'
+        predicted_location = model.predict(data)
+        print(f'Predicted Location: {predicted_location[0]}')
+
+# Run the prediction
+predict_location()

+ 57 - 0
qrcode123.py

@@ -0,0 +1,57 @@
+import cv2
+import pygame
+from pyzbar.pyzbar import decode
+
+# Initialize pygame for audio playback
+pygame.mixer.init()
+print("Pygame initialized.")
+
+# Function to play audio
+def play_audio(file_path):
+    print(f"Attempting to play audio: {file_path}")
+    pygame.mixer.music.load(file_path)
+    pygame.mixer.music.play()
+    while pygame.mixer.music.get_busy():
+        pygame.time.wait(100)  # yield instead of busy-waiting while audio plays
+    print("Finished playing audio.")
+
+# Capture video from camera
+cap = cv2.VideoCapture(0)
+if not cap.isOpened():
+    print("Error: Could not access the camera.")
+    exit()
+
+print("Camera accessed successfully.")
+
+while True:
+    ret, frame = cap.read()
+    if not ret:
+        print("Failed to grab frame.")
+        break
+
+    # Decode the QR code in the frame
+    decoded_objects = decode(frame)
+    if decoded_objects:
+        print(f"Decoded {len(decoded_objects)} QR code(s).")
+    
+    for obj in decoded_objects:
+        qr_data = obj.data.decode('utf-8')
+        print(f"QR Code detected, data: {qr_data}")
+
+        # Check if the QR code content is "hydrosense"
+        if qr_data == "hydrosense":
+            audio_file_path = "/home/pi/audio/hydrosense.mp3"  # Modify this path if needed
+            play_audio(audio_file_path)
+    
+    # Display the frame
+    cv2.imshow("QR Code Scanner", frame)
+
+    # Break the loop if 'q' is pressed
+    if cv2.waitKey(1) & 0xFF == ord('q'):
+        print("Exiting.")
+        break
+
+# Release the capture and close windows
+cap.release()
+cv2.destroyAllWindows()
+print("Camera released and windows closed.")

BIN
smartsanitiser.mp3


+ 49 - 0
smartsanitiserdispenser.txt

@@ -0,0 +1,49 @@
+Overview of Smart Sanitizer Dispenser Solution
+The Smart Sanitizer Dispenser is an IoT-based solution designed to enhance workplace hygiene by automating sanitizer dispenser management. This system ensures that sanitizer dispensers are always functional and filled, providing a reliable health security measure for companies of all sizes.
+
+Background and Purpose
+Ensuring compliance with pandemic-related health guidelines is a challenge for many companies, especially those with large workspaces like the Vodafone campus. Manual monitoring and refilling of sanitizer dispensers are labor-intensive and inefficient. The Smart Sanitizer Dispenser aims to:
+
+Automate Fill Level Monitoring: Eliminate the need for manual checks.
+Enhance Workplace Hygiene: Ensure dispensers are always filled and functional.
+Optimize Dispenser Placement: Use data analytics to improve dispenser locations.
+Technical Details
+Hardware Components:
+Microcontroller: Measures and calculates the sanitizer usage.
+Sensors: Detect the fill level of the dispenser.
+Connectivity Module: Transmits data over Narrowband-IoT (NB-IoT).
+Battery: Powers the device for continuous operation.
+Software:
+Firmware: Developed to efficiently monitor and transmit usage data.
+Dashboard: Provides a user-friendly interface for monitoring dispenser status and usage analytics.
+Connectivity
+Narrowband-IoT (NB-IoT): Ensures reliable data transmission even in areas with poor network coverage.
+Continuous Monitoring: Regularly updates fill level status and usage statistics.
+Solution Description
+The Smart Sanitizer Dispenser automates the monitoring process, ensuring that dispensers are always ready for use. Key features include:
+
+Automated Fill Level Control: Housekeeping staff can check fill levels via a dashboard, eliminating manual checks.
+Usage Analytics: Tracks how often dispensers are used, helping to optimize their placement and predict refill dates.
+Real-time Alerts: Notifies staff when a refill is needed or if there are any issues with the dispenser.
+Application Opportunities
+The smart fill level tracking solution is versatile and can be applied to various types of dispensers beyond sanitizers, offering potential for more complex Industry 4.0 applications:
+
+Office Buildings: Maintains hygiene in large corporate environments.
+Public Spaces: Ensures dispensers in high-traffic areas are always functional.
+Industrial Settings: Can be adapted for other dispensing needs in manufacturing and logistics.
+Project Timeline
+04/21: Project inception and initial design phase.
+12/07/21: Completion of initial showcase.
+Ongoing: Further evaluation and potential expansion to other types of dispensers.
+Future Development
+The Smart Sanitizer Dispenser solution has completed its prototype phase and is currently in use. Future developments include:
+
+Expansion: Adapting the technology for use with other dispensers.
+Enhanced Analytics: Improving data analytics capabilities for better insights.
+Integration: Potential integration with other smart building solutions.
+Key Points of Contact
+For further information or to express interest in the Smart Sanitizer Dispenser solution, the main point of contact is:
+
+SPOC: Leon Kersten
+Conclusion
+The Smart Sanitizer Dispenser offers a robust and efficient solution for maintaining hygiene in workplaces and public spaces. By automating fill level monitoring and providing real-time usage data, the system ensures that dispensers are always ready for use, enhancing health security and operational efficiency.




+ 28 - 0
test.py

@@ -0,0 +1,28 @@
+# train_model.py
+import pandas as pd
+from sklearn.model_selection import train_test_split
+from sklearn.ensemble import RandomForestClassifier
+from sklearn.metrics import accuracy_score
+import joblib
+
+# Load the collected data from CSV
+data = pd.read_csv('wifi_signals.csv')
+
+# Features (SignalStrength) and target (Location)
+X = data[['SignalStrength']]
+y = data['Location']
+
+# Split the data into training and test sets (80% train, 20% test)
+X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
+
+# Train a Random Forest Classifier
+model = RandomForestClassifier()
+model.fit(X_train, y_train)
+
+# Evaluate the model
+y_pred = model.predict(X_test)
+accuracy = accuracy_score(y_test, y_pred)
+print(f'Model Accuracy: {accuracy * 100:.2f}%')
+
+# Save the trained model to a file
+joblib.dump(model, 'wifi_model.pkl')
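`wifi_signals.csv` stores one row per detected network with no access-point identity, so the classifier above sees a single scalar feature per sample. A common refinement in Wi-Fi fingerprinting (hypothetical data layout, not what `wifi_scan.py` currently records) is one feature column per known access point, with a floor value for APs missing from a scan:

```python
def scans_to_features(rows, ap_ids, missing=-100):
    """Group per-scan RSSI readings into fixed-length vectors,
    one slot per known access point; absent APs get the floor value."""
    scans = {}
    for r in rows:
        scans.setdefault(r["scan"], {})[r["bssid"]] = r["rssi"]
    return [[readings.get(ap, missing) for ap in ap_ids]
            for readings in scans.values()]
```

The resulting vectors can feed the same `RandomForestClassifier` as above, with one training sample per scan instead of one per detected network.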

+ 58 - 0
testcamera.py

@@ -0,0 +1,58 @@
+import cv2
+from picamera2 import Picamera2
+from pyzbar.pyzbar import decode
+import pygame
+import time
+
+# Initialize the camera and pygame for playing audio
+picam2 = Picamera2()
+picam2.start()
+pygame.mixer.init()
+
+def play_audio(mp3_file):
+    """Plays the specified MP3 file."""
+    try:
+        pygame.mixer.music.load(mp3_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+    except Exception as e:
+        print(f"Error playing audio: {e}")
+
+def scan_qr_code(image):
+    """Scans the QR code from the given image and returns the decoded data."""
+    qr_codes = decode(image)
+    if qr_codes:
+        # Extract the string from the QR code
+        qr_data = qr_codes[0].data.decode('utf-8')
+        print("QR Code detected:", qr_data)
+        return qr_data
+    return None
+
+while True:
+    # Capture an image from the camera
+    image = picam2.capture_array()
+    
+    # Convert the image to grayscale (QR code scanning works better in grayscale)
+    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
+    
+    # Scan for QR code in the image
+    qr_data = scan_qr_code(gray)
+    
+    if qr_data:
+        # If QR code contains "hydrosense", play the corresponding MP3
+        if qr_data.lower() == "hydrosense":  # Case insensitive comparison
+            print("Playing Hydrosense.mp3")
+            play_audio("Hydrosense.mp3")  # Make sure the MP3 file is in the correct path
+            
+    # Display the image (for debugging purposes)
+    cv2.imshow('QR Code Scanner', image)
+    
+    # Exit the loop if 'q' is pressed
+    if cv2.waitKey(1) & 0xFF == ord('q'):
+        break
+
+# Cleanup
+picam2.stop()
+cv2.destroyAllWindows()
+pygame.quit()

+ 69 - 0
testing.py

@@ -0,0 +1,69 @@
+import pygame
+import time
+from openai import OpenAI
+
+# Initialize OpenAI API key
+api_key = 'sk-proj-wwWaxim1Qt13uq5645dfghvbnzSS0xjT3BlbkFJK0rZvx78AJiWG3Ot7d3S'
+client = OpenAI(api_key=api_key)
+
+def user_input():
+    while True:
+        try:
+            num = int(input("Enter a number from 1 to 3: "))
+            if 1 <= num <= 3:
+                return num
+            else:
+                print("Invalid input. Please enter a number between 1 and 3.")
+        except ValueError:
+            print("Invalid input. Please enter a valid integer.")
+
+def play_audio(num):
+    audio_files = {
+        1: "Gigabee.mp3",
+        2: "Hydrosense.mp3",  # file names are case-sensitive on Linux
+        3: "Pushtotalk.mp3",
+    }
+    
+    text_files = {
+        1: "Gigabeeprotect.txt",
+        2: "Hydrosense.txt",
+        3: "Pushtotalk.txt",
+    }
+
+    audio_file = audio_files.get(num)
+    text_file = text_files.get(num)
+    
+    if audio_file and text_file:
+        pygame.mixer.init()
+        pygame.mixer.music.load(audio_file)
+        pygame.mixer.music.play()
+        while pygame.mixer.music.get_busy():
+            time.sleep(1)
+
+        # Prompt for Q&A mode
+        user_choice = input("Would you like to enter Q&A mode? (yes/no): ").strip().lower()
+        if user_choice == 'yes':
+            with open(text_file, 'r') as file:
+                context = file.read()
+            start_qa_mode(context)
+
+def start_qa_mode(context):
+    while True:
+        question = input("Ask a question (or type 'exit' to quit): ").strip()
+        if question.lower() == 'exit':
+            break
+        
+        response = client.chat.completions.create(
+            model="gpt-4o",  # the legacy engine/prompt arguments are not accepted by the chat API
+            messages=[
+                {"role": "user", "content": f"{context}\n\nQuestion: {question}\nAnswer:"}
+            ],
+            max_tokens=150
+        )
+
+        answer = response.choices[0].message.content
+        print(f"Answer: {answer}")
+
+if __name__ == "__main__":
+    while True:
+        number = user_input()
+        play_audio(number)
+        # Loop continues to ask for new user input after Q&A mode ends or if no Q&A mode is entered




+ 33 - 0
wifi_scan.py

@@ -0,0 +1,33 @@
+import subprocess
+import csv
+import time
+
+def get_wifi_signal():
+    scan_output = subprocess.check_output(['sudo', 'iwlist', 'wlan0', 'scan']).decode('utf-8')
+    return scan_output
+
+def parse_signal_strength(scan_output):
+    signal_strengths = []
+    for line in scan_output.split('\n'):
+        if "Signal level=" in line:
+            strength = line.split("Signal level=")[1].split(" ")[0]
+            signal_strengths.append(int(strength))
+    return signal_strengths
+
+def collect_data(file_path, location_id):
+    with open(file_path, 'a', newline='') as csvfile:
+        fieldnames = ['Location', 'SignalStrength']
+        writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
+        if csvfile.tell() == 0:
+            writer.writeheader()
+        scan_output = get_wifi_signal()
+        signal_strengths = parse_signal_strength(scan_output)
+        for strength in signal_strengths:
+            writer.writerow({'Location': location_id, 'SignalStrength': strength})
+
+# Example usage
+location_id = 'Showcase1'  # Change to unique ID for each location
+file_path = 'wifi_signals.csv'
+while True:
+    collect_data(file_path, location_id)
+    time.sleep(60)  # Collect data every minute
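`parse_signal_strength` splits on `"Signal level="` and a space, which raises on `iwlist` variants that report `Signal level=60/100` instead of dBm. A more tolerant regex-based sketch of the same extraction (assuming only the leading integer is wanted):

```python
import re

SIGNAL_RE = re.compile(r"Signal level[=:]\s*(-?\d+)")

def parse_signal_levels(scan_output):
    """Extract integer signal levels from iwlist output; tolerates
    'Signal level=-51 dBm', 'Signal level:-51' and 'Signal level=60/100'."""
    return [int(m.group(1)) for m in SIGNAL_RE.finditer(scan_output)]
```

This is a drop-in alternative for both `wifi_scan.py` and `prediction.py`, which duplicate the same parsing code.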

+ 568 - 0
wifi_signals.csv

@@ -0,0 +1,568 @@
+Location,SignalStrength
+Showcase1,-51
+Showcase1,-40
+Showcase1,-40
+Showcase1,-58
+Showcase1,-50
+Showcase1,-50
+Showcase1,-55
+Showcase1,-83
+Showcase1,-82
+Showcase1,-50
+Showcase1,-50
+Showcase1,-69
+Showcase1,-69
+Showcase1,-50
+Showcase1,-69
+Showcase1,-65
+Showcase1,-69
+Showcase1,-70
+Showcase1,-67
+Showcase1,-67
+Showcase1,-67
+Showcase1,-81
+Showcase1,-81
+Showcase1,-81
+Showcase1,-71
+Showcase1,-86
+Showcase1,-83
+Showcase1,-83
+Showcase1,-83
+Showcase1,-85
+Showcase1,-52
+Showcase1,-73
+Showcase1,-70
+Showcase1,-71
+Showcase1,-83
+Showcase1,-70
+Showcase1,-82
+Showcase1,-81
+Showcase1,-52
+Showcase1,-52
+Showcase1,-54
+Showcase1,-52
+Showcase1,-73
+Showcase1,-76
+Showcase1,-40
+Showcase1,-48
+Showcase1,-40
+Showcase1,-70
+Showcase1,-57
+Showcase1,-59
+Showcase1,-68
+Showcase1,-69
+Showcase1,-50
+Showcase1,-44
+Showcase1,-69
+Showcase1,-69
+Showcase1,-45
+Showcase1,-69
+Showcase1,-63
+Showcase1,-69
+Showcase1,-69
+Showcase1,-82
+Showcase1,-83
+Showcase1,-84
+Showcase1,-66
+Showcase1,-66
+Showcase1,-66
+Showcase1,-83
+Showcase1,-81
+Showcase1,-81
+Showcase1,-82
+Showcase1,-83
+Showcase1,-51
+Showcase1,-54
+Showcase1,-40
+Showcase1,-41
+Showcase1,-50
+Showcase1,-67
+Showcase1,-68
+Showcase1,-73
+Showcase1,-53
+Showcase1,-52
+Showcase1,-56
+Showcase1,-52
+Showcase1,-73
+Showcase1,-73
+Showcase1,-73
+Showcase1,-69
+Showcase1,-69
+Showcase1,-69
+Showcase1,-66
+Showcase1,-76
+Showcase1,-73
+Showcase1,-85
+Showcase1,-85
+Showcase1,-89
+Showcase1,-40
+Showcase1,-40
+Showcase1,-67
+Showcase1,-67
+Showcase1,-67
+Showcase1,-79
+Showcase1,-82
+Showcase1,-81
+Showcase1,-51
+Showcase1,-56
+Showcase1,-39
+Showcase1,-40
+Showcase1,-50
+Showcase1,-62
+Showcase1,-64
+Showcase1,-72
+Showcase1,-51
+Showcase1,-51
+Showcase1,-56
+Showcase1,-51
+Showcase1,-72
+Showcase1,-70
+Showcase1,-68
+Showcase1,-68
+Showcase1,-68
+Showcase1,-66
+Showcase1,-68
+Showcase1,-43
+Showcase1,-43
+Showcase1,-66
+Showcase1,-66
+Showcase1,-66
+Showcase1,-82
+Showcase1,-83
+Showcase1,-61
+Showcase1,-61
+Showcase1,-65
+Showcase1,-80
+Showcase1,-85
+Showcase1,-85
+Showcase1,-85
+Showcase1,-81
+Showcase1,-82
+Showcase1,-82
+Showcase1,-82
+Showcase1,-51
+Showcase1,-39
+Showcase1,-38
+Showcase1,-47
+Showcase1,-65
+Showcase1,-64
+Showcase1,-71
+Showcase1,-51
+Showcase1,-51
+Showcase1,-51
+Showcase1,-71
+Showcase1,-70
+Showcase1,-69
+Showcase1,-69
+Showcase1,-69
+Showcase1,-65
+Showcase1,-68
+Showcase1,-43
+Showcase1,-43
+Showcase1,-66
+Showcase1,-66
+Showcase1,-65
+Showcase1,-81
+Showcase1,-82
+Showcase1,-62
+Showcase1,-55
+Showcase1,-65
+Showcase1,-81
+Showcase1,-84
+Showcase1,-81
+Showcase1,-71
+Showcase1,-77
+Showcase1,-85
+Showcase1,-85
+Showcase1,-80
+Showcase1,-81
+Showcase1,-51
+Showcase1,-38
+Showcase1,-40
+Showcase1,-53
+Showcase1,-65
+Showcase1,-74
+Showcase1,-55
+Showcase1,-52
+Showcase1,-51
+Showcase1,-68
+Showcase1,-68
+Showcase1,-68
+Showcase1,-66
+Showcase1,-70
+Showcase1,-45
+Showcase1,-44
+Showcase1,-66
+Showcase1,-66
+Showcase1,-66
+Showcase1,-80
+Showcase1,-85
+Showcase1,-62
+Showcase1,-60
+Showcase1,-65
+Showcase1,-83
+Showcase1,-80
+Showcase1,-70
+Showcase1,-85
+Showcase1,-77
+Showcase1,-84
+Showcase1,-83
+Showcase1,-82
+Showcase1,-83
+Showcase1,-80
+Showcase1,-80
+Showcase1,-80
+Showcase1,-53
+Showcase1,-39
+Showcase1,-39
+Showcase1,-47
+Showcase1,-68
+Showcase1,-68
+Showcase1,-51
+Showcase1,-52
+Showcase1,-68
+Showcase1,-68
+Showcase1,-68
+Showcase1,-66
+Showcase1,-63
+Showcase1,-45
+Showcase1,-45
+Showcase1,-65
+Showcase1,-65
+Showcase1,-66
+Showcase1,-83
+Showcase1,-84
+Showcase1,-70
+Showcase1,-68
+Showcase1,-82
+Showcase1,-79
+Showcase1,-87
+Showcase1,-87
+Showcase1,-82
+Showcase1,-65
+Showcase1,-56
+Showcase1,-75
+Showcase1,-75
+Showcase1,-83
+Showcase1,-80
+Showcase1,-79
+Showcase1,-51
+Showcase1,-36
+Showcase1,-36
+Showcase1,-48
+Showcase1,-65
+Showcase1,-73
+Showcase1,-52
+Showcase1,-52
+Showcase1,-72
+Showcase1,-72
+Showcase1,-72
+Showcase1,-67
+Showcase1,-65
+Showcase1,-46
+Showcase1,-45
+Showcase1,-66
+Showcase1,-66
+Showcase1,-66
+Showcase1,-81
+Showcase1,-66
+Showcase1,-65
+Showcase1,-80
+Showcase1,-84
+Showcase1,-81
+Showcase1,-64
+Showcase1,-68
+Showcase1,-78
+Showcase1,-55
+Showcase1,-61
+Showcase1,-55
+Showcase1,-69
+Showcase1,-84
+Showcase1,-81
+Showcase1,-84
+Showcase1,-87
+Showcase1,-80
+Showcase1,-80
+Showcase1,-51
+Showcase1,-38
+Showcase1,-38
+Showcase1,-48
+Showcase1,-67
+Showcase1,-73
+Showcase1,-51
+Showcase1,-51
+Showcase1,-67
+Showcase1,-67
+Showcase1,-67
+Showcase1,-65
+Showcase1,-66
+Showcase1,-44
+Showcase1,-45
+Showcase1,-67
+Showcase1,-67
+Showcase1,-67
+Showcase1,-81
+Showcase1,-71
+Showcase1,-67
+Showcase1,-80
+Showcase1,-84
+Showcase1,-80
+Showcase1,-65
+Showcase1,-74
+Showcase1,-79
+Showcase1,-58
+Showcase1,-84
+Showcase1,-81
+Showcase1,-85
+Showcase1,-83
+Showcase1,-75
+Showcase1,-85
+Showcase1,-79
+Showcase1,-50
+Showcase1,-38
+Showcase1,-38
+Showcase1,-50
+Showcase1,-72
+Showcase1,-53
+Showcase1,-53
+Showcase1,-70
+Showcase1,-70
+Showcase1,-70
+Showcase1,-64
+Showcase1,-73
+Showcase1,-47
+Showcase1,-47
+Showcase1,-68
+Showcase1,-68
+Showcase1,-68
+Showcase1,-81
+Showcase1,-67
+Showcase1,-68
+Showcase1,-81
+Showcase1,-85
+Showcase1,-80
+Showcase1,-74
+Showcase1,-58
+Showcase1,-83
+Showcase1,-85
+Showcase1,-86
+Showcase1,-84
+Showcase1,-85
+Showcase1,-79
+Showcase1,-51
+Showcase1,-85
+Showcase1,-87
+Showcase1,-79
+Showcase1,-51
+Showcase1,-36
+Showcase1,-36
+Showcase1,-50
+Showcase1,-72
+Showcase1,-52
+Showcase1,-52
+Showcase1,-71
+Showcase1,-71
+Showcase1,-70
+Showcase1,-66
+Showcase1,-70
+Showcase1,-42
+Showcase1,-42
+Showcase1,-68
+Showcase1,-68
+Showcase1,-67
+Showcase1,-84
+Showcase1,-64
+Showcase1,-85
+Showcase1,-85
+Showcase1,-86
+Showcase1,-80
+Showcase1,-82
+Showcase1,-83
+Showcase1,-88
+Showcase1,-86
+Showcase1,-77
+Showcase1,-55
+Showcase1,-80
+Showcase1,-70
+Showcase1,-58
+Showcase1,-57
+Showcase1,-72
+Showcase1,-82
+Showcase1,-51
+Showcase1,-40
+Showcase1,-40
+Showcase1,-50
+Showcase1,-68
+Showcase1,-54
+Showcase1,-54
+Showcase1,-69
+Showcase1,-69
+Showcase1,-69
+Showcase1,-63
+Showcase1,-73
+Showcase1,-44
+Showcase1,-44
+Showcase1,-68
+Showcase1,-68
+Showcase1,-67
+Showcase1,-83
+Showcase1,-57
+Showcase1,-84
+Showcase1,-87
+Showcase1,-82
+Showcase1,-84
+Showcase1,-86
+Showcase1,-85
+Showcase1,-58
+Showcase1,-82
+Showcase1,-61
+Showcase1,-56
+Showcase1,-55
+Showcase1,-70
+Showcase1,-69
+Showcase1,-74
+Showcase1,-68
+Showcase1,-76
+Showcase1,-87
+Showcase1,-72
+Showcase1,-82
+Showcase1,-81
+Showcase1,-56
+Showcase1,-72
+Showcase1,-58
+Showcase1,-51
+Showcase1,-52
+Showcase1,-76
+Showcase1,-38
+Showcase1,-38
+Showcase1,-48
+Showcase1,-64
+Showcase1,-64
+Showcase1,-54
+Showcase1,-68
+Showcase1,-68
+Showcase1,-67
+Showcase1,-51
+Showcase1,-84
+Showcase1,-68
+Showcase1,-68
+Showcase1,-68
+Showcase1,-77
+Showcase1,-65
+Showcase1,-72
+Showcase1,-83
+Showcase1,-84
+Showcase1,-86
+Showcase1,-51
+Showcase1,-62
+Showcase1,-63
+Showcase1,-63
+Showcase1,-51
+Showcase1,-80
+Showcase1,-80
+Showcase1,-80
+Showcase1,-80
+Showcase1,-79
+Showcase1,-50
+Showcase1,-84
+Showcase1,-38
+Showcase1,-47
+Showcase1,-39
+Showcase1,-63
+Showcase1,-72
+Showcase1,-70
+Showcase1,-70
+Showcase1,-73
+Showcase1,-77
+Showcase1,-54
+Showcase1,-57
+Showcase1,-60
+Showcase1,-81
+Showcase1,-74
+Showcase1,-54
+Showcase1,-79
+Showcase1,-83
+Showcase1,-70
+Showcase1,-70
+Showcase1,-70
+Showcase1,-67
+Showcase1,-73
+Showcase1,-85
+Showcase1,-83
+Showcase1,-45
+Showcase1,-45
+Showcase1,-83
+Showcase1,-83
+Showcase1,-81
+Showcase1,-83
+Showcase1,-82
+Showcase1,-52
+Showcase1,-39
+Showcase1,-50
+Showcase1,-38
+Showcase1,-64
+Showcase1,-72
+Showcase1,-54
+Showcase1,-54
+Showcase1,-71
+Showcase1,-54
+Showcase1,-85
+Showcase1,-70
+Showcase1,-70
+Showcase1,-70
+Showcase1,-65
+Showcase1,-71
+Showcase1,-83
+Showcase1,-75
+Showcase1,-43
+Showcase1,-43
+Showcase1,-83
+Showcase1,-84
+Showcase1,-83
+Showcase1,-68
+Showcase1,-55
+Showcase1,-72
+Showcase1,-75
+Showcase1,-66
+Showcase1,-66
+Showcase1,-66
+Showcase1,-77
+Showcase1,-85
+Showcase1,-80
+Showcase1,-81
+Showcase1,-50
+Showcase1,-41
+Showcase1,-49
+Showcase1,-39
+Showcase1,-67
+Showcase1,-74
+Showcase1,-53
+Showcase1,-55
+Showcase1,-75
+Showcase1,-53
+Showcase1,-84
+Showcase1,-69
+Showcase1,-68
+Showcase1,-68
+Showcase1,-66
+Showcase1,-73
+Showcase1,-84
+Showcase1,-45
+Showcase1,-45
+Showcase1,-82
+Showcase1,-82
+Showcase1,-82
+Showcase1,-75
+Showcase1,-74
+Showcase1,-64
+Showcase1,-64
+Showcase1,-65
+Showcase1,-77
+Showcase1,-84
+Showcase1,-70
+Showcase1,-70
+Showcase1,-57
+Showcase1,-81
+Showcase1,-85
+Showcase1,-86
+Showcase1,-85