22 Commits

Author SHA1 Message Date
eba64617d9 fix: listener count not broadcasted initially 2024-11-26 18:10:13 +00:00
a50cf023d6 chore: update README 2024-11-25 17:56:12 +00:00
bd4827dd99 refactor: use websocket again 2024-11-25 17:53:37 +00:00
b50190f67e chore: update startup script 2024-11-25 02:04:03 +00:00
624dc4c08a chore: update README 2024-11-25 02:01:13 +00:00
272558e59a fix: buggy space bar control 2024-11-25 00:35:09 +00:00
087af66434 refactor: use sse instead of ws for listener count 2024-11-25 00:22:44 +00:00
0ef8bc9e36 feat: make cat animation a gif 2024-11-24 15:50:20 +00:00
bb56cc0eff Merge pull request #14 from harrowmykel/open-fal-ai-in-new-tab: open external urls in new tab 2024-11-24 14:53:58 +00:00
harrowmykel f926a883a7 open external urls in new tab 2024-11-18 17:18:54 +01:00
bec523ce6b fix: decrease polling rate 2024-09-03 23:36:04 +01:00
ba62f3bfa2 feat: add footer links and fal logo 2024-08-26 17:27:51 +01:00
498fa97a86 fix: read fal api key from environ 2024-08-26 12:24:09 +01:00
280aa1dcb4 Merge pull request #12 from kennethnym/fal-integration: Add code for fal.ai integration 2024-08-25 17:08:48 +01:00
3299f2dcc6 fix: change local inference server url 2024-08-22 23:50:58 +01:00
b398f4d025 fix: remove unused websocket obj in web server 2024-08-22 23:50:17 +01:00
58b47201a6 refactor: use http polling instead of websocket 2024-08-22 23:48:16 +01:00
13ae315b7d fix: write generated audio to fal storage 2024-08-20 21:12:02 +01:00
8e76c91c72 Merge branch 'main' into fal-integration 2024-08-20 20:53:52 +01:00
74d2378ef4 chore: remove redundant prompt strings 2024-07-26 18:12:40 +01:00
37cf800d8d feat: add websocket endpoint for fal 2024-07-25 11:00:52 +01:00
0e744af73f feat: initial code for fal.ai integration 2024-07-25 10:36:24 +01:00
16 changed files with 442 additions and 150 deletions


@@ -12,27 +12,17 @@ the frontend is written using pure HTML/CSS/JS with no external dependencies. it
## inference
infinifi consists of two parts, the inference server and the web server. the two servers are connected via a websocket connection. whenever the web server requires new audio clips to be generated, it sends a "generate" message to the inference server, which triggers the generation on the inference server. when the generation is done, the inference server sends back the audio in mp3 format back to the web server over websocket. once the web server receives the mp3 data, it saves them locally as mp3 files.
infinifi consists of two parts, the inference server and the web server. 5 audio clips are generated each time an inference request is received from the web server. the web server requests an inference at a set interval. after the request is made, it polls the inference server until the audio is generated and available for download. it then downloads the 5 generated clips and saves them locally. at most 10 clips are saved at a time.
when the inference server is down, the web server will recycle saved clips until it is back up again.
## requirements
- python >= 3.10 (tested with python 3.12 only)
## running it yourself
you are welcome to self host infinifi. to get started, make sure that:
- you are using python 3.9/3.10;
- port 8001 (used by the inference server) is free.
then follow the steps below:
1. install `torch==2.1.0`
2. install deps from both `requirements-inference.txt` and `requirements-server.txt`
3. run the inference server: `python inference_server.py`
4. run the web server: `fastapi run server.py --port ${ANY_PORT_NUMBER}`
you can also run the inference server and the web server on separate machines. to let the web server know where to connect to the inference server, specify the `INFERENCE_SERVER_WS_URL` environment variable when running the web server:
```
INFERENCE_SERVER_WS_URL=ws://2.3.1.4:1938 fastapi run server.py --port ${ANY_PORT_NUMBER}
```
i have recently changed the networking between the web server and the inference server. at the moment, the inference happens on fal infrastructure (`fal_app.py`), and i have yet to update the standalone inference server code `inference_server.py` to match the new architecture.
## feature requests

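A minimal sketch of the 10-slot clip rotation the README above describes, mirroring the offset logic in `server.py` further down in this diff; illustrative only, not part of the commit:

```
# illustrative sketch of the clip rotation described in the README (not part of the diff)
TOTAL_SLOTS = 10  # at most 10 clips are kept on disk, in two banks of 5


def next_write_offset(current_index: int) -> int | None:
    """Return the bank (0 or 5) to overwrite next, or None if no generation is due."""
    if current_index == 0:
        return 5  # playing bank 0-4, so refresh bank 5-9
    if current_index == 5:
        return 0  # playing bank 5-9, so refresh bank 0-4
    return None  # mid-bank: keep serving (or recycling) the existing clips


def advance(current_index: int) -> int:
    """Move the playhead forward one slot per minute, wrapping after the last slot."""
    return 0 if current_index == TOTAL_SLOTS - 1 else current_index + 1
```

Generation is therefore only triggered twice per 10-minute cycle, which is what keeps at most 10 clips on disk while playback recycles the other bank.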
fal_app.py Normal file

@@ -0,0 +1,73 @@
import datetime
from pathlib import Path
import threading
from audiocraft.data.audio import audio_write
import fal
from fastapi import Response, status
import torch
DATA_DIR = Path("/data/audio")
PROMPTS = [
"Create a futuristic lo-fi beat that blends modern electronic elements with synthwave influences. Incorporate smooth, atmospheric synths and gentle, relaxing rhythms to evoke a sense of a serene, neon-lit future. Ensure the track is continuous with no background noise or interruptions, maintaining a calm and tranquil atmosphere throughout while adding a touch of retro-futuristic vibes.",
"gentle lo-fi beat with a smooth, mellow piano melody in the background. Ensure there are no background noises or interruptions, maintaining a continuous and seamless flow throughout the track. The beat should be relaxing and tranquil, perfect for a calm and reflective atmosphere.",
"Create an earthy lo-fi beat that evokes a natural, grounded atmosphere. Incorporate organic sounds like soft percussion, rustling leaves, and gentle acoustic instruments. The track should have a warm, soothing rhythm with a continuous flow and no background noise or interruptions, maintaining a calm and reflective ambiance throughout.",
"Create a soothing lo-fi beat featuring gentle, melodic guitar riffs. The guitar should be the focal point, supported by subtle, ambient electronic elements and a smooth, relaxed rhythm. Ensure the track is continuous with no background noise or interruptions, maintaining a warm and mellow atmosphere throughout.",
"Create an ambient lo-fi beat with a tranquil and ethereal atmosphere. Use soft, atmospheric pads, gentle melodies, and minimalistic percussion to evoke a sense of calm and serenity. Ensure the track is continuous with no background noise or interruptions, maintaining a soothing and immersive ambiance throughout.",
]
class InfinifiFalApp(fal.App, keep_alive=300):
machine_type = "GPU-A6000"
requirements = [
"torch==2.1.0",
"audiocraft==1.3.0",
"torchaudio==2.1.0",
"websockets==11.0.3",
"numpy==1.26.4",
]
__is_generating = False
def setup(self):
import torchaudio
from audiocraft.models.musicgen import MusicGen
self.model = MusicGen.get_pretrained("facebook/musicgen-large")
self.model.set_generation_params(duration=60)
@fal.endpoint("/generate")
def run(self):
if self.__is_generating:
return Response(status_code=status.HTTP_409_CONFLICT)
threading.Thread(target=self.__generate_audio).start()
@fal.endpoint("/clips/{index}")
def get_clips(self, index):
if self.__is_generating:
return Response(status_code=status.HTTP_404_NOT_FOUND)
path = DATA_DIR.joinpath(f"{index}")
with open(path.with_suffix(".mp3"), "rb") as f:
data = f.read()
return Response(content=data)
def __generate_audio(self):
self.__is_generating = True
print(f"[INFO] {datetime.datetime.now()}: generating audio...")
wav = self.model.generate(PROMPTS)
for i, one_wav in enumerate(wav):
path = DATA_DIR.joinpath(f"{i}")
audio_write(
path,
one_wav.cpu(),
self.model.sample_rate,
format="mp3",
strategy="loudness",
loudness_compressor=True,
make_parent_dir=True,
)
self.__is_generating = False
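For context, a hedged sketch of how a client drives the two endpoints above, mirroring what `server.py` below does; the base URL and API key are placeholders:

```
# illustrative client for the fal app endpoints above; URL and key are placeholders
import time

import requests

BASE_URL = "http://localhost:8001"  # wherever the fal app is exposed
HEADERS = {"Authorization": "key YOUR_FAL_KEY"}

# kick off a generation run; the app answers 409 if one is already in progress
requests.post(f"{BASE_URL}/generate", headers=HEADERS)

# poll until clip 0 is downloadable (404 while generation is still running)
while True:
    res = requests.post(f"{BASE_URL}/clips/0", stream=True, headers=HEADERS)
    if res.status_code == 200:
        break
    time.sleep(30)

with open("0.mp3", "wb") as f:
    for chunk in res.iter_content(chunk_size=128):
        f.write(chunk)
```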


@@ -2,22 +2,17 @@ import torchaudio
from audiocraft.models.musicgen import MusicGen
from audiocraft.data.audio import audio_write
from prompts import PROMPTS
MODEL_NAME = "facebook/musicgen-large"
MUSIC_DURATION_SECONDS = 60
model = MusicGen.get_pretrained(MODEL_NAME)
model.set_generation_params(duration=MUSIC_DURATION_SECONDS)
descriptions = [
"Create a futuristic lo-fi beat that blends modern electronic elements with synthwave influences. Incorporate smooth, atmospheric synths and gentle, relaxing rhythms to evoke a sense of a serene, neon-lit future. Ensure the track is continuous with no background noise or interruptions, maintaining a calm and tranquil atmosphere throughout while adding a touch of retro-futuristic vibes.",
"gentle lo-fi beat with a smooth, mellow piano melody in the background. Ensure there are no background noises or interruptions, maintaining a continuous and seamless flow throughout the track. The beat should be relaxing and tranquil, perfect for a calm and reflective atmosphere.",
"Create an earthy lo-fi beat that evokes a natural, grounded atmosphere. Incorporate organic sounds like soft percussion, rustling leaves, and gentle acoustic instruments. The track should have a warm, soothing rhythm with a continuous flow and no background noise or interruptions, maintaining a calm and reflective ambiance throughout.",
"Create a soothing lo-fi beat featuring gentle, melodic guitar riffs. The guitar should be the focal point, supported by subtle, ambient electronic elements and a smooth, relaxed rhythm. Ensure the track is continuous with no background noise or interruptions, maintaining a warm and mellow atmosphere throughout.",
"Create an ambient lo-fi beat with a tranquil and ethereal atmosphere. Use soft, atmospheric pads, gentle melodies, and minimalistic percussion to evoke a sense of calm and serenity. Ensure the track is continuous with no background noise or interruptions, maintaining a soothing and immersive ambiance throughout.",
]
def generate(offset=0):
wav = model.generate(descriptions)
wav = model.generate(PROMPTS)
for idx, one_wav in enumerate(wav):
# Will save under {idx}.wav, with loudness normalization at -14 db LUFS.


@@ -3,6 +3,8 @@ import time
from audiocraft.models.musicgen import MusicGen
from audiocraft.data.audio import audio_write
from prompts import PROMPTS
MODEL_NAME = "facebook/musicgen-large"
MUSIC_DURATION_SECONDS = 60
@@ -10,18 +12,11 @@ print("obtaining model...")
model = MusicGen.get_pretrained(MODEL_NAME)
model.set_generation_params(duration=MUSIC_DURATION_SECONDS)
descriptions = [
"Create a futuristic lo-fi beat that blends modern electronic elements with synthwave influences. Incorporate smooth, atmospheric synths and gentle, relaxing rhythms to evoke a sense of a serene, neon-lit future. Ensure the track is continuous with no background noise or interruptions, maintaining a calm and tranquil atmosphere throughout while adding a touch of retro-futuristic vibes.",
"gentle lo-fi beat with a smooth, mellow piano melody in the background. Ensure there are no background noises or interruptions, maintaining a continuous and seamless flow throughout the track. The beat should be relaxing and tranquil, perfect for a calm and reflective atmosphere.",
"Create an earthy lo-fi beat that evokes a natural, grounded atmosphere. Incorporate organic sounds like soft percussion, rustling leaves, and gentle acoustic instruments. The track should have a warm, soothing rhythm with a continuous flow and no background noise or interruptions, maintaining a calm and reflective ambiance throughout.",
"Create a soothing lo-fi beat featuring gentle, melodic guitar riffs. The guitar should be the focal point, supported by subtle, ambient electronic elements and a smooth, relaxed rhythm. Ensure the track is continuous with no background noise or interruptions, maintaining a warm and mellow atmosphere throughout.",
"Create an ambient lo-fi beat with a tranquil and ethereal atmosphere. Use soft, atmospheric pads, gentle melodies, and minimalistic percussion to evoke a sense of calm and serenity. Ensure the track is continuous with no background noise or interruptions, maintaining a soothing and immersive ambiance throughout.",
]
print("model obtained. generating audio...")
a = time.time()
wav = model.generate(descriptions)
wav = model.generate(PROMPTS)
b = time.time()
print(f"audio generated. took {b - a} seconds.")

listener_counter.py Normal file

@@ -0,0 +1,19 @@
import threading
class ListenerCounter:
def __init__(self) -> None:
self.__listener = set()
self.__lock = threading.Lock()
def add_listener(self, listener_id: str):
with self.__lock:
self.__listener.add(listener_id)
def remove_listener(self, listener_id: str):
with self.__lock:
self.__listener.discard(listener_id)
def count(self) -> int:
with self.__lock:
return len(self.__listener)
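A quick usage sketch (the addresses are made-up examples); the counter is shared across websocket handlers, and `discard` makes removing an unknown listener a no-op:

```
# usage sketch for ListenerCounter; addresses are made-up examples
counter = ListenerCounter()
counter.add_listener("203.0.113.7")
counter.add_listener("203.0.113.8")
counter.remove_listener("203.0.113.7")
print(counter.count())  # 1
```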

prompts.py Normal file

@@ -0,0 +1,7 @@
PROMPTS = [
"Create a futuristic lo-fi beat that blends modern electronic elements with synthwave influences. Incorporate smooth, atmospheric synths and gentle, relaxing rhythms to evoke a sense of a serene, neon-lit future. Ensure the track is continuous with no background noise or interruptions, maintaining a calm and tranquil atmosphere throughout while adding a touch of retro-futuristic vibes.",
"gentle lo-fi beat with a smooth, mellow piano melody in the background. Ensure there are no background noises or interruptions, maintaining a continuous and seamless flow throughout the track. The beat should be relaxing and tranquil, perfect for a calm and reflective atmosphere.",
"Create an earthy lo-fi beat that evokes a natural, grounded atmosphere. Incorporate organic sounds like soft percussion, rustling leaves, and gentle acoustic instruments. The track should have a warm, soothing rhythm with a continuous flow and no background noise or interruptions, maintaining a calm and reflective ambiance throughout.",
"Create a soothing lo-fi beat featuring gentle, melodic guitar riffs. The guitar should be the focal point, supported by subtle, ambient electronic elements and a smooth, relaxed rhythm. Ensure the track is continuous with no background noise or interruptions, maintaining a warm and mellow atmosphere throughout.",
"Create an ambient lo-fi beat with a tranquil and ethereal atmosphere. Use soft, atmospheric pads, gentle melodies, and minimalistic percussion to evoke a sense of calm and serenity. Ensure the track is continuous with no background noise or interruptions, maintaining a soothing and immersive ambiance throughout.",
]


@@ -1,2 +1,4 @@
fastapi==0.111.1
websocket_client==1.8.0
fastapi==0.115.5
websockets==14.1
logger==1.4
Requests==2.32.3

server.py

@@ -1,11 +1,17 @@
import threading
import os
from time import sleep
import requests
import websocket
from contextlib import asynccontextmanager
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi import (
FastAPI,
WebSocket,
status,
)
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles
from listener_counter import ListenerCounter
from logger import log_info, log_warn
from websocket_connection_manager import WebSocketConnectionManager
@@ -13,33 +19,32 @@ from websocket_connection_manager import WebSocketConnectionManager
current_index = -1
# the timer that periodically advances the current audio track
t = None
# websocket connection to the inference server
ws = None
ws_url = ""
inference_url = ""
api_key = ""
ws_connection_manager = WebSocketConnectionManager()
active_listeners = set()
listener_counter = ListenerCounter()
@asynccontextmanager
async def lifespan(app: FastAPI):
global ws, ws_url
global ws, inference_url, api_key
ws_url = os.environ.get("INFERENCE_SERVER_WS_URL")
if not ws_url:
ws_url = "ws://localhost:8001"
inference_url = os.environ.get("INFERENCE_SERVER_URL")
api_key = os.environ.get("API_KEY")
if not inference_url:
inference_url = "http://localhost:8001"
advance()
yield
if ws:
ws.close()
if t:
t.cancel()
def generate_new_audio():
if not ws_url:
if not inference_url:
return
global current_index
@@ -52,31 +57,61 @@ def generate_new_audio():
else:
return
log_info("generating new audio...")
log_info("requesting new audio...")
try:
ws = websocket.create_connection(ws_url)
ws.send("generate")
wavs = []
for i in range(5):
raw = ws.recv()
if isinstance(raw, str):
continue
wavs.append(raw)
for i, wav in enumerate(wavs):
with open(f"{i + offset}.mp3", "wb") as f:
f.write(wav)
log_info("audio generated.")
ws.close()
requests.post(
f"{inference_url}/generate",
headers={"Authorization": f"key {api_key}"},
)
except:
log_warn(
"inference server potentially unreachable. recycling cached audio for now."
)
return
is_available = False
while not is_available:
try:
res = requests.post(
f"{inference_url}/clips/0",
stream=True,
headers={"Authorization": f"key {api_key}"},
)
except:
log_warn(
"inference server potentially unreachable. recycling cached audio for now."
)
return
if res.status_code != status.HTTP_200_OK:
print(res.status_code)
print("still generating...")
sleep(30)
continue
print("inference complete! downloading new clips")
is_available = True
with open(f"{offset}.mp3", "wb") as f:
for chunk in res.iter_content(chunk_size=128):
f.write(chunk)
for i in range(4):
res = requests.post(
f"{inference_url}/clips/{i + 1}",
stream=True,
headers={"Authorization": f"key {api_key}"},
)
if res.status_code != status.HTTP_200_OK:
continue
with open(f"{i + 1 + offset}.mp3", "wb") as f:
for chunk in res.iter_content(chunk_size=128):
f.write(chunk)
log_info("audio generated.")
def advance():
@@ -110,24 +145,24 @@ async def ws_endpoint(ws: WebSocket):
else:
await ws.close()
ws_connection_manager.disconnect(ws)
return
await ws_connection_manager.broadcast(f"{len(active_listeners)}")
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
try:
while True:
msg = await ws.receive_text()
if msg == "playing":
active_listeners.add(addr)
await ws_connection_manager.broadcast(f"{len(active_listeners)}")
elif msg == "paused":
active_listeners.discard(addr)
await ws_connection_manager.broadcast(f"{len(active_listeners)}")
except WebSocketDisconnect:
active_listeners.discard(addr)
match msg:
case "listening":
listener_counter.add_listener(addr)
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
case "paused":
listener_counter.remove_listener(addr)
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
except:
listener_counter.remove_listener(addr)
ws_connection_manager.disconnect(ws)
await ws_connection_manager.broadcast(f"{len(active_listeners)}")
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
app.mount("/", StaticFiles(directory="web", html=True), name="web")
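From a client's point of view, the `/ws` protocol boils down to sending "listening" or "paused" and reading back the broadcast listener count as plain text. A minimal sketch using the `websockets` package from the server requirements; host and port are placeholders:

```
# illustrative /ws client; host and port are placeholders
import asyncio

import websockets


async def main():
    async with websockets.connect("ws://localhost:8000/ws") as ws:
        print(await ws.recv())      # initial listener count broadcast on connect
        await ws.send("listening")  # count this client in
        print(await ws.recv())      # updated count
        await ws.send("paused")     # count this client back out


asyncio.run(main())
```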


@@ -1,2 +1,2 @@
#!/bin/sh
uvicorn server:app --host 0.0.0.0 --port 443 --ssl-keyfile private.key.pem --ssl-certfile domain.cert.pem > server.log 2> server.err < /dev/null
INFERENCE_SERVER_URL=SERVER_URL API_KEY=FAL_API_KEY nohup uvicorn server:app --host 0.0.0.0 --port 443 --ssl-keyfile /path/to/ssl/private/key/file --ssl-certfile /path/to/ssl/cert/file > server.log 2> server.err < /dev/null &

tmp/server.py Normal file

@@ -0,0 +1,166 @@
import threading
import os
from time import sleep
import requests
from contextlib import asynccontextmanager
from fastapi import (
FastAPI,
WebSocket,
status,
)
from fastapi.responses import FileResponse
from fastapi.staticfiles import StaticFiles
from listener_counter import ListenerCounter
from logger import log_info, log_warn
from websocket_connection_manager import WebSocketConnectionManager
# the index of the current audio track from 0 to 9
current_index = -1
# the timer that periodically advances the current audio track
t = None
inference_url = ""
api_key = ""
ws_connection_manager = WebSocketConnectionManager()
listener_counter = ListenerCounter()
@asynccontextmanager
async def lifespan(app: FastAPI):
global ws, inference_url, api_key
inference_url = os.environ.get("INFERENCE_SERVER_URL")
api_key = os.environ.get("API_KEY")
if not inference_url:
inference_url = "http://localhost:8001"
advance()
yield
if t:
t.cancel()
def generate_new_audio():
if not inference_url:
return
global current_index
offset = 0
if current_index == 0:
offset = 5
elif current_index == 5:
offset = 0
else:
return
log_info("requesting new audio...")
try:
requests.post(
f"{inference_url}/generate",
headers={"Authorization": f"key {api_key}"},
)
except:
log_warn(
"inference server potentially unreachable. recycling cached audio for now."
)
return
is_available = False
while not is_available:
try:
res = requests.post(
f"{inference_url}/clips/0",
stream=True,
headers={"Authorization": f"key {api_key}"},
)
except:
log_warn(
"inference server potentially unreachable. recycling cached audio for now."
)
return
if res.status_code != status.HTTP_200_OK:
print(res.status_code)
print("still generating...")
sleep(30)
continue
print("inference complete! downloading new clips")
is_available = True
with open(f"{offset}.mp3", "wb") as f:
for chunk in res.iter_content(chunk_size=128):
f.write(chunk)
for i in range(4):
res = requests.post(
f"{inference_url}/clips/{i + 1}",
stream=True,
headers={"Authorization": f"key {api_key}"},
)
if res.status_code != status.HTTP_200_OK:
continue
with open(f"{i + 1 + offset}.mp3", "wb") as f:
for chunk in res.iter_content(chunk_size=128):
f.write(chunk)
log_info("audio generated.")
def advance():
global current_index, t
if current_index == 9:
current_index = 0
else:
current_index = current_index + 1
threading.Thread(target=generate_new_audio).start()
t = threading.Timer(60, advance)
t.start()
app = FastAPI(lifespan=lifespan)
@app.get("/current.mp3")
def get_current_audio():
return FileResponse(f"{current_index}.mp3")
@app.websocket("/ws")
async def ws_endpoint(ws: WebSocket):
await ws_connection_manager.connect(ws)
addr = ""
if ws.client:
addr, _ = ws.client
else:
await ws.close()
ws_connection_manager.disconnect(ws)
return
try:
while True:
msg = await ws.receive_text()
match msg:
case "listening":
listener_counter.add_listener(addr)
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
case "paused":
listener_counter.remove_listener(addr)
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
except:
listener_counter.remove_listener(addr)
ws_connection_manager.disconnect(ws)
await ws_connection_manager.broadcast(f"{listener_counter.count()}")
app.mount("/", StaticFiles(directory="web", html=True), name="web")

web/images/cat.gif Normal file
Binary file not shown (287 B).


@@ -0,0 +1 @@
<svg width="100%" height="100%" viewBox="0 0 89 32" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M52.308 3.07812H57.8465V4.92428H56.0003V6.77043H54.1541V10.4627H57.8465V12.3089H54.1541V25.232H52.308V27.0781H46.7695V25.232H48.6157V12.3089H46.7695V10.4627H48.6157V6.77043H50.4618V4.92428H52.308V3.07812Z" fill="#cdd6f4"></path><path d="M79.3849 23.3858H81.2311V25.232H83.0772V27.0781H88.6157V25.232H86.7695V23.3858H84.9234V4.92428H79.3849V23.3858Z" fill="#cdd6f4"></path><path d="M57.8465 14.155H59.6926V12.3089H61.5388V10.4627H70.7695V12.3089H74.4618V23.3858H76.308V25.232H78.1541V27.0781H72.6157V25.232H70.7695V23.3858H68.9234V14.155H67.0772V12.3089H65.2311V14.155H63.3849V23.3858H65.2311V25.232H67.0772V27.0781H61.5388V25.232H59.6926V23.3858H57.8465V14.155Z" fill="#cdd6f4"></path><path d="M67.0772 25.232V23.3858H68.9234V25.232H67.0772Z" fill="#cdd6f4"></path><rect opacity="0.22" x="7.38477" y="29.5391" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="2.46094" y="19.6914" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="4.92383" y="17.2305" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="7.38477" y="27.0781" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" y="22.1562" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="7.38477" y="24.6133" width="7.38462" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.22" x="7.38477" y="12.3086" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="2.46094" y="2.46094" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="4.92383" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="7.38477" y="9.84375" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" y="4.92188" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="7.38477" y="7.38281" width="7.38462" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.22" x="24.6152" y="29.5391" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="19.6914" y="19.6914" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="22.1543" y="17.2305" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="24.6152" y="27.0781" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" x="17.2305" y="22.1562" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="24.6152" y="24.6133" width="7.38462" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.22" x="24.6152" y="12.3086" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="19.6914" y="2.46094" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="22.1543" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="24.6152" y="9.84375" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" x="17.2305" y="4.92188" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="24.6152" y="7.38281" width="7.38462" height="2.46154" fill="#5F4CD9"></rect></svg>


@@ -0,0 +1 @@
<svg width="100%" height="100%" viewBox="0 0 89 32" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M52.308 3.07812H57.8465V4.92428H56.0003V6.77043H54.1541V10.4627H57.8465V12.3089H54.1541V25.232H52.308V27.0781H46.7695V25.232H48.6157V12.3089H46.7695V10.4627H48.6157V6.77043H50.4618V4.92428H52.308V3.07812Z" fill="currentColor"></path><path d="M79.3849 23.3858H81.2311V25.232H83.0772V27.0781H88.6157V25.232H86.7695V23.3858H84.9234V4.92428H79.3849V23.3858Z" fill="currentColor"></path><path d="M57.8465 14.155H59.6926V12.3089H61.5388V10.4627H70.7695V12.3089H74.4618V23.3858H76.308V25.232H78.1541V27.0781H72.6157V25.232H70.7695V23.3858H68.9234V14.155H67.0772V12.3089H65.2311V14.155H63.3849V23.3858H65.2311V25.232H67.0772V27.0781H61.5388V25.232H59.6926V23.3858H57.8465V14.155Z" fill="currentColor"></path><path d="M67.0772 25.232V23.3858H68.9234V25.232H67.0772Z" fill="currentColor"></path><rect opacity="0.22" x="7.38477" y="29.5391" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="2.46094" y="19.6914" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="4.92383" y="17.2305" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="7.38477" y="27.0781" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" y="22.1562" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="7.38477" y="24.6133" width="7.38462" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.22" x="7.38477" y="12.3086" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="2.46094" y="2.46094" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="4.92383" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="7.38477" y="9.84375" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" y="4.92188" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="7.38477" y="7.38281" width="7.38462" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.22" x="24.6152" y="29.5391" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="19.6914" y="19.6914" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="22.1543" y="17.2305" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="24.6152" y="27.0781" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" x="17.2305" y="22.1562" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="24.6152" y="24.6133" width="7.38462" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.22" x="24.6152" y="12.3086" width="2.46154" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.85" x="19.6914" y="2.46094" width="12.3077" height="2.46154" fill="#5F4CD9"></rect><rect x="22.1543" width="9.84615" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.4" x="24.6152" y="9.84375" width="4.92308" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.7" x="17.2305" y="4.92188" width="14.7692" height="2.46154" fill="#5F4CD9"></rect><rect opacity="0.5" x="24.6152" y="7.38281" width="7.38462" height="2.46154" fill="#5F4CD9"></rect></svg>


@@ -45,13 +45,21 @@
<p id="notification-body">make milo meow 100 times</p>
</aside>
<img id="cat" src="./images/cat-0.png">
<img id="cat" src="./images/cat.gif">
<img id="eeping-cat" src="./images/eeping-cat.png">
<img id="heart" src="./images/heart.png">
</div>
<footer>
<span>made by kennethnym &lt;3 ·&nbsp;</span>
made by&nbsp<a href="https://kennethnym.com" target="_blank">kennethnym</a>&nbsp;&lt;3 · powered by
<a class="fal-link" href="https://fal.ai" target="_blank">
<picture>
<source srcset="./images/fal-logo-light.svg" media="(prefers-color-scheme: light)" />
<source srcset="./images/fal-logo-dark.svg" media="(prefers-color-scheme: dark)" />
<img class="fal-logo" fill="none" src="./images/fal-logo-dark.svg">
</picture>
</a>
·
<a target="_blank" rel="noopener noreferrer" href="https://github.com/kennethnym/infinifi">github</a>
</footer>


@@ -22,6 +22,8 @@ const achievementUnlockedAudio = document.getElementById(
"achievement-unlocked-audio",
);
const ws = initializeWebSocket();
let isPlaying = false;
let isFading = false;
let currentAudio;
@@ -29,7 +31,6 @@ let maxVolume = 100;
let currentVolume = 0;
let saveVolumeTimeout = null;
let meowCount = 0;
let ws = connectToWebSocket();
function playAudio() {
// add a random query parameter at the end to prevent browser caching
@@ -37,17 +38,13 @@ function playAudio() {
currentAudio.onplay = () => {
isPlaying = true;
playBtn.innerText = "pause";
if (ws) {
ws.send("playing");
}
updateClientStatus({ isListening: true });
};
currentAudio.onpause = () => {
isPlaying = false;
currentVolume = 0;
playBtn.innerText = "play";
if (ws) {
ws.send("paused");
}
updateClientStatus({ isListening: false });
};
currentAudio.onended = () => {
currentVolume = 0;
@@ -107,67 +104,31 @@ function fadeOut() {
}, CROSSFADE_INTERVAL_MS);
}
function animateCat() {
let current = 0;
setInterval(() => {
if (current === 3) {
current = 0;
} else {
current += 1;
}
catImg.src = `./images/cat-${current}.png`;
}, 500);
}
/**
* Allow audio to be played/paused using the space bar
*/
function enableSpaceBarControl() {
let isDown = false;
document.addEventListener("keydown", (event) => {
if (isDown) return;
if (event.code === "Space") {
isDown = true;
playBtn.classList.add("button-active");
playBtn.dispatchEvent(new MouseEvent("mousedown"));
clickAudio.play();
}
});
document.addEventListener("keyup", (event) => {
isDown = false;
if (event.code === "Space") {
playBtn.classList.remove("button-active");
clickReleaseAudio.play();
if (isPlaying) {
pauseAudio();
} else {
playAudio();
}
}
});
document.addEventListener("keypress", (event) => {
if (event.code === "Space") {
playBtn.click();
}
});
}
function connectToWebSocket() {
const ws = new WebSocket(
`${location.protocol === "https:" ? "wss:" : "ws:"}//${location.host}/ws`,
);
ws.onmessage = (event) => {
console.log(event.data);
if (typeof event.data !== "string") {
return;
}
const listenerCountStr = event.data;
const listenerCount = Number.parseInt(listenerCountStr);
if (Number.isNaN(listenerCount)) {
return;
}
if (listenerCount <= 1) {
listenerCountLabel.innerText = `${listenerCount} person tuned in`;
} else {
listenerCountLabel.innerText = `${listenerCount} ppl tuned in`;
}
};
return ws;
}
function changeVolume(volume) {
@@ -229,6 +190,47 @@ function showNotification(title, content, duration) {
}, duration);
}
function updateListenerCountLabel(newCount) {
if (newCount <= 1) {
listenerCountLabel.innerText = `${newCount} person tuned in`;
} else {
listenerCountLabel.innerText = `${newCount} ppl tuned in`;
}
}
async function updateClientStatus(status) {
if (status.isListening) {
ws.send("listening");
} else {
ws.send("paused");
}
}
function initializeWebSocket() {
const ws = new WebSocket(
`${location.protocol === "https:" ? "wss:" : "ws:"}//${location.host}/ws`,
);
ws.onmessage = (event) => {
if (typeof event.data !== "string") {
return;
}
const listenerCount = Number.parseInt(event.data);
if (Number.isNaN(listenerCount)) {
return;
}
updateListenerCountLabel(listenerCount);
};
return ws;
}
window.addEventListener("beforeunload", (e) => {
updateClientStatus({ isListening: false });
});
playBtn.onmousedown = () => {
clickAudio.play();
document.addEventListener(
@@ -288,15 +290,6 @@ meowAudio.onplay = () => {
// don't wanna jumpscare ppl
achievementUnlockedAudio.volume = 0.05;
window.addEventListener("offline", () => {
ws = null;
});
window.addEventListener("online", () => {
ws = connectToWebSocket();
});
loadMeowCount();
loadInitialVolume();
animateCat();
enableSpaceBarControl();


@@ -72,9 +72,12 @@ main {
footer {
padding: 1rem 0;
display: flex;
align-items: center;
justify-content: center;
color: var(--text);
text-align: center;
text-wrap: nowrap;
overflow: auto;
}
@media (min-width: 768px) {
footer {
@@ -82,11 +85,6 @@ footer {
}
}
footer > span {
color: var(--text);
opacity: 0.5;
}
a {
color: var(--link-accent);
}
@@ -378,3 +376,12 @@ aside#notification > #notification-title {
opacity: 0;
}
}
.fal-logo {
vertical-align: bottom;
height: 1rem;
}
.fal-link {
text-decoration: none;
}